I just set up a new machine of my own with a Core i5 and a GTX 1070. I installed the NVIDIA drivers and CUDA and did the fastai setup. However, when I try to run the example Cats and Dogs model, it takes forever to return any results after downloading the ResNet model. How do I make sure the learner is using the GPU and not the CPU?
You can run in your terminal:
watch -n 1 nvidia-smi
Check out the Volatile GPU-Util column.
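If you'd rather check utilization from a script than watch the table, here is a minimal sketch. It assumes `nvidia-smi` is on your PATH; the `parse_utilization` / `gpu_utilization` helpers are my own, not part of any library:

```python
import subprocess

def parse_utilization(csv_text: str) -> list:
    """Parse output of `nvidia-smi --query-gpu=utilization.gpu --format=csv,noheader`,
    e.g. '37 %' per line, into integer percentages, one per GPU."""
    return [int(line.strip().rstrip("%").strip())
            for line in csv_text.strip().splitlines() if line.strip()]

def gpu_utilization() -> list:
    """Ask nvidia-smi for current GPU utilization (requires an NVIDIA GPU)."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=utilization.gpu", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_utilization(out)

# Parsing demonstrated on captured sample output:
print(parse_utilization("37 %\n0 %"))  # -> [37, 0]
```

If the number stays at 0% while training, the work is almost certainly running on the CPU.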
Thanks!! It says 0% Default, but I do see it as one of the GPU processes at the bottom. Am I missing something? Why is it taking so much time?
Cool, now you can test whether PyTorch is using your GPU.
Check it out:
As we work on setting up our environments, I found this quite useful:
To check that torch is using a GPU:
In : import torch
In : torch.cuda.current_device()
In : torch.cuda.device(0)
Out: <torch.cuda.device at 0x7f2132913c50>
In : torch.cuda.device_count()
In : torch.cuda.get_device_name(0)
Out: 'Tesla K80'
To check that keras is using a GPU:
import tensorflow as tf
and check the jupyte…
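Building on the checks above, a small defensive sketch that works even on a box where torch isn't installed or no GPU is visible (the `pick_device` helper is my own, not part of fastai or PyTorch):

```python
import importlib.util

def pick_device() -> str:
    """Return 'cuda' if PyTorch is installed and can see a GPU, else 'cpu'."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # torch not installed at all
    import torch
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())
```

If this prints `cpu` on a machine with a GTX 1070, the CUDA toolkit or driver setup is the problem, not the training code.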
Yes, it does look like PyTorch is using the GPU, going by the steps mentioned in that thread.
How long is a lot?
Loading the images takes time.
5–7 min is acceptable;
more, maybe not.
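To see where those minutes actually go, you can time the loading step yourself. A rough sketch; the lambda below is just a stand-in for your own image-loading call:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed wall-clock seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Stand-in workload; replace with your data-loading step:
result, seconds = timed(lambda: sum(range(1_000_000)))
print(f"took {seconds:.3f}s")
```

If loading alone eats most of the 5–7 minutes, the GPU isn't the bottleneck.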
It's working now. It turned out that just adding /usr/local/cuda-9.0 to PATH didn't help; I also added a CUDA_HOME variable to .bashrc. Thanks for the pointers!!
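For anyone hitting the same thing, a quick sanity check of the environment variables from Python (the `cuda_env_report` helper is my own; it only reports what is set, it doesn't validate the install):

```python
import os

def cuda_env_report() -> dict:
    """Report whether CUDA_HOME is set and whether any PATH entry mentions cuda."""
    cuda_home = os.environ.get("CUDA_HOME", "")
    path_entries = os.environ.get("PATH", "").split(os.pathsep)
    return {
        "CUDA_HOME set": bool(cuda_home),
        "cuda on PATH": any("cuda" in p.lower() for p in path_entries),
    }

print(cuda_env_report())
```

Both should come back `True` after editing .bashrc and opening a fresh shell (the variables won't appear in terminals opened before the edit).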