no other package has been installed.
I see that the BASE env has cudatoolkit installed and the fastai2 env does not, but that was never an issue before and I always had access to my GPU from the fastai2 env.
Okay, it seems that the pytorch and torchvision versions were not aligned with the cudatoolkit version. So I downgraded cudatoolkit to 9.0, with conda taking care of installing the matching pytorch/torchvision versions, and CUDA was finally available.
Then I upgraded cudatoolkit to 10.1, again with conda taking care of pytorch/torchvision, and now everything works fine.
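For reference, the commands are along these lines (a sketch using the standard pytorch conda channel; the exact versions conda resolves may differ on your setup):

conda install pytorch torchvision cudatoolkit=9.0 -c pytorch    # downgrade path; conda picks pytorch/torchvision builds compiled against CUDA 9.0
conda install pytorch torchvision cudatoolkit=10.1 -c pytorch   # upgrade path; conda picks builds compiled against CUDA 10.1

Run them inside the env you want to fix (conda activate fastai2 first), so the aligned packages land in that env rather than in base.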
Small tip: if you want to quickly check whether CUDA is available from the CLI, just type:
python -c "import torch; print(torch.cuda.is_available())"
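If that prints False, a slightly longer snippet can help narrow down the mismatch (a minimal sketch using standard PyTorch attributes):

import torch
print(torch.__version__)           # installed PyTorch version
print(torch.version.cuda)          # CUDA version PyTorch was built against (None for CPU-only builds)
print(torch.cuda.is_available())   # True only if a compatible GPU and driver are found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first visible GPU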
Could you give instructions on how you downgraded/upgraded cudatoolkit? Like what commands did you use in the terminal?
Also, what does CLI stand for? I'm assuming it refers to the terminal, but I'm not sure.