Desktop with Nvidia GPU vs Cloud service

Hi friends,

I already have a desktop with a modest Nvidia GPU, with pytorch, torchvision, and fastai installed. Can one of you point me to a quick sanity-check command I can use to verify that all the required libraries are installed properly?


Running the fastai notebooks would be the best sanity check.

Cool. Thanks! I installed the ones mentioned on the github page, and now

from fastai import *
from fastai.vision import *

seem to go through fine on my local machine.


import fastai; fastai.show_install(1)

This will show you your environment. It’s a pretty good starting point.


awesome! works fine… shows my GPU version as well :slight_smile:

Cool, the next thing is to run (in the command line on Linux)

watch -n 1 nvidia-smi

while you are running a command that you believe is utilizing your GPU. You should see a line that ends in python.
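If you need a workload to watch while nvidia-smi is running, here's a minimal sketch (it falls back to CPU when CUDA isn't available, so it runs anywhere) that keeps the device busy with matrix multiplies:

```python
import torch

# Use the GPU if torch can see one; otherwise fall back to CPU so the
# snippet still runs.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Repeated large matrix multiplies keep the device busy; clamping stops
# the values from overflowing to inf.
a = torch.randn(1000, 1000, device=device)
b = torch.randn(1000, 1000, device=device)
for _ in range(20):
    a = (a @ b).clamp(-1.0, 1.0)

if device == "cuda":
    torch.cuda.synchronize()  # wait for queued GPU kernels to finish

print("ran on:", device)
```

While this loop runs on a CUDA device, you should see the python process appear in nvidia-smi's process list.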


:frowning: I haven't configured ubuntu on my machine yet - for now I'm using anaconda on Win10. Is there an equivalent command on Windows?

Probably. I’m not 100% sure though. Hopefully somebody else will be able to help.
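If I remember right, nvidia-smi.exe ships with the Windows driver as well and can refresh itself with `nvidia-smi -l 1`, so you may not need `watch` at all. Failing that, here's a rough cross-platform stand-in that polls from Python using only torch (the function name is just an illustration):

```python
import time
import torch

def poll_gpu_memory(seconds=5, interval=1.0):
    """Crudely mimic `watch -n 1 nvidia-smi`: print the GPU memory
    torch has allocated, once per interval."""
    if not torch.cuda.is_available():
        print("CUDA not available; nothing to poll")
        return
    deadline = time.time() + seconds
    while time.time() < deadline:
        mb = torch.cuda.memory_allocated() / 1024 ** 2
        print(f"gpu0 allocated by torch: {mb:.1f} MB")
        time.sleep(interval)

poll_gpu_memory(seconds=3)
```

Note this only reports memory torch itself has allocated, not overall GPU utilization, but it's enough to confirm your code is actually landing on the GPU.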

The previous command spit out this output. Can I take this to mean that any experiment I run with torch will use the GPU?

=== Software === 
python version  : 3.6.5
fastai version  : 1.0.11
torch version   : 0.4.1
torch cuda ver  : 9.0
torch cuda is   : available
torch cudnn ver : 7005
torch cudnn is  : enabled

=== Hardware === 
torch available : 1
  - gpu0        : GeForce GTX 1080

Thank you very much. Once I run some stuff I will get to know and learn. Will update the forum with what I learn.



You should try to upgrade to pytorch 1.0: it looks like you’re on 0.4.1 right now, and that’s not fully compatible with fastai v1.

Ah, thanks, let me do that right away.

I’m gonna try to save you some time: pytorch 1.0 doesn’t support Windows yet, so I don’t think you’re going to get the install working on your current system.

'nvidia-smi dmon' and 'nvidia-smi pmon' are pretty useful as well.

Here are some sanity-check commands you can use to verify that all the required libraries are installed properly:

  1. Verify PyTorch 1.0 by bringing up a terminal and typing the following command:
python -c 'import torch; print(torch.__version__)'

You should get an output that looks like 1.0.0.dev2018XXXX. Otherwise, your PyTorch framework is not installed properly.

  2. Verify that you’ve installed the GPU drivers properly:
python -c 'import torch; print(torch.cuda.device_count());'

You should see more than 0 in the output of this command.

  3. Verify that the fastai 1.0 library is installed properly:
python -c 'import fastai; fastai.show_install(0)'

You should see an output that looks like the following. Note: your output may differ from mine.

=== Software === 
python version  : 3.6.6
fastai version  : 1.0.x
torch version   : 1.0.0.dev2018XXXX
nvidia driver   : 410.66
torch cuda ver  : 9.2.148
torch cuda is   : available
torch cudnn ver : 7104
torch cudnn is  : enabled

=== Hardware === 
nvidia gpus     : 1
torch available : 1
  - gpu0        : 11441MB | Tesla K80

=== Environment === 
platform        : Linux-4.4.0-59-generic-x86_64-with-debian-stretch-sid
distro          : Ubuntu 16.04 Xenial Xerus
conda env       : fastai-v1
python          : /home/ubuntu/anaconda3/envs/fastai-v1/bin/python
sys.path        : 

For more information, please refer to the troubleshooting guide in the fastai developer docs.
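The three checks above can also be rolled into one short script. A sketch (the version strings will of course differ on your machine, and the fastai import is guarded in case it isn't installed yet):

```python
import torch

# 1. PyTorch is importable and reports a version.
print("torch version :", torch.__version__)

# 2. The CUDA stack and GPU driver are visible to torch.
print("cuda available:", torch.cuda.is_available())
print("gpu count     :", torch.cuda.device_count())

# 3. fastai is importable and reports a version.
try:
    import fastai
    print("fastai version:", fastai.__version__)
except ImportError:
    print("fastai version: not installed")
```

If the first two prints look right but the gpu count is 0, the problem is with the driver/CUDA setup rather than with PyTorch itself.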


Thank you - I spent most of yesterday building pytorch 1.0 from source on Windows. I don't know if it works fine now; I had to make lots of stupid fixes. Even if I get it working, I have to figure out how to tell anaconda about a pytorch installation that wasn't done using conda. I hope the v1 libraries work with pytorch 0.4 (for the purposes of the course) - is there anything basic that would break, or is the compatibility broken only for some fancy cases?


Thank you Cedric for the detailed instructions. That helps.