Hi,
Following this tutorial, Using Azure FastAI, I created the instance and everything seemed to be working, except that training was awfully slow. Like really slow. Upon further investigation, I realised that the Python (fastai) kernel was not using the GPU: `torch.cuda.is_available()` returned `False`.
When I changed to another kernel, like Python 3.6 (AzureML), the same check returned `True`.
Does anyone know how I can enable the GPU in the fastai kernel?
Actually, it looks like the latest version of PyTorch now also expects and installs a local cudatoolkit package, CUDA 10 by default. On the DSVM (K80) it looks like only CUDA 9 works. You can downgrade to cudatoolkit 9.0 by running the command: `conda install pytorch torchvision cudatoolkit=9.0 -c pytorch`
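For background, my guess as to why CUDA 10 fails here: each CUDA toolkit release has a minimum NVIDIA driver version (from NVIDIA's compatibility table, CUDA 9.0 needs a Linux driver ≥ 384.81 and CUDA 10.0 needs ≥ 410.48), and the driver preinstalled on the DSVM image may predate the CUDA 10 minimum. A rough sketch of that compatibility check — the `driver_supports` helper and the 390.116 driver version are illustrative, not something the DSVM actually ships:

```python
# Minimum Linux NVIDIA driver version required by each CUDA toolkit release
# (values from NVIDIA's CUDA compatibility table).
MIN_DRIVER = {"9.0": (384, 81), "10.0": (410, 48)}

def driver_supports(toolkit: str, driver: str) -> bool:
    """Return True if an installed driver version string can run the toolkit."""
    installed = tuple(int(part) for part in driver.split(".")[:2])
    return installed >= MIN_DRIVER[toolkit]

# An image with an older 390.x driver (example version) can run CUDA 9.0 but
# not CUDA 10.0 — which would explain torch.cuda.is_available() returning
# False until cudatoolkit is downgraded.
print(driver_supports("10.0", "390.116"))  # False
print(driver_supports("9.0", "390.116"))   # True
```

So the conda command above fixes things not by changing the GPU, but by giving PyTorch a toolkit the existing driver can actually load.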
This is how I did it on a Linux Ubuntu VM, for anyone who would rather fix the existing machine in place than create a new one with the fix, and who is unfamiliar with how to connect to the machine or with what is happening behind the scenes with Jupyter notebooks.
Open a terminal (cmd, PowerShell, Git Bash, etc.)
SSH into the machine with the command `ssh <your username>@<ip address>`
This is the same IP address and username you use to connect to the Jupyter notebook.
Enter the same password you use to connect to the Jupyter notebook.
Run the command: `conda activate fastai`
Run the command provided by zenlytix: `conda install pytorch torchvision cudatoolkit=9.0 -c pytorch`
If you have a Jupyter notebook running, restart the kernel (from the menu bar at the top)
Insert and run a cell at the top of your Jupyter notebook to verify the installation worked:
`import torch`
`print(torch.cuda.is_available())`
This should print `True`, and you can continue with the notebook as usual.
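If you want your notebooks to keep working even on a machine where the check still prints `False`, one option is to select the device with a fallback instead of assuming a GPU — a minimal sketch, not anything the tutorial requires:

```python
import torch

# Use the GPU when the cudatoolkit/driver combination works, otherwise
# fall back to the CPU so the notebook still runs (just slowly).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Any tensors or models you create can then be moved to that device.
x = torch.randn(2, 3).to(device)
print(x.device)  # "cuda:0" on the fixed DSVM, "cpu" elsewhere
```

fastai normally picks the GPU for you when one is visible, so this is mainly useful for quick sanity checks in plain PyTorch cells.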