Fastai2 Conda env CUDA not available

Hello there,
I just installed fastai2 on a new GCP instance, following the installation commands on dev.fast.ai, but strangely CUDA is available in the conda base env and not in the new fastai2 env.

No other packages have been installed.
I see that the base env has cudatoolkit installed while the fastai2 env does not, but that was never an issue before and I always had access to my GPU from the fastai2 env.

Have you guys had this issue before?

Okay, it seems the pytorch and torchvision versions were not aligned with the cudatoolkit version. So I downgraded cudatoolkit to 9.0, with conda taking care of installing the matching pytorch/torchvision versions, and CUDA was finally available.
I then upgraded cudatoolkit to 10.1, again letting conda handle pytorch/torchvision, and now everything works fine.
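For reference, the commands look roughly like this (just a sketch, assuming the pytorch conda channel; conda resolves the exact pytorch/torchvision builds):

conda activate fastai2
# downgrade: let conda pick pytorch/torchvision builds that match cudatoolkit 9.0
conda install -c pytorch cudatoolkit=9.0 pytorch torchvision
# then upgrade to 10.1 the same way
conda install -c pytorch cudatoolkit=10.1 pytorch torchvision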

Small tip: if you want to quickly check whether CUDA is available from the CLI, just type:
python -c "import torch; print(torch.cuda.is_available())"
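
If that prints False, it can also help to print which torch build you have and which CUDA version it was compiled against (same one-liner idea, nothing fastai-specific):
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"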

I got a similar result with CUDA not available when installing fastai2 (see the Fastai v2 chat thread). What I did was go to the pytorch page and copy the install command for my drivers: https://pytorch.org/get-started/locally/
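
After installing, a quick way to confirm that the versions in the active env actually line up (assuming a conda install on Linux):
conda list | grep -E "pytorch|torchvision|cudatoolkit"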

Could you give instructions on how you downgraded/upgraded cudatoolkit? Like what commands did you use in the terminal?
Also what does CLI stand for? I’m assuming it refers to the terminal but I’m not sure.