I should mention that I did not specify the number of workers and left it at the default. Can you try that? Maybe this is the difference between my code and yours…
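If it helps, this is roughly how I would set it explicitly (a minimal sketch using the fastai v1 vision API; the dataset path, image size, and batch size are placeholders, not from my actual notebook — as far as I know the default comes from `defaults.cpus`):

```python
# Minimal sketch: build a DataBunch with num_workers set explicitly
# instead of relying on the default.
from fastai.vision import ImageDataBunch, get_transforms, imagenet_stats

data = ImageDataBunch.from_folder(
    'data/pets',              # hypothetical dataset path
    ds_tfms=get_transforms(),
    size=224,                 # placeholder image size
    bs=64,                    # placeholder batch size
    num_workers=4,            # set explicitly to compare against the default
).normalize(imagenet_stats)
```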
Here are my installed library versions:
from fastai.utils import *
show_install()
=== Software ===
python version : 3.7.0
fastai version : 1.0.30
torch version : 1.0.0.dev20181120
nvidia driver : 410.72
torch cuda ver : 9.2.148
torch cuda is : available
torch cudnn ver : 7401
torch cudnn is : enabled
=== Hardware ===
nvidia gpus : 1
torch available : 1
- gpu0 : 16130MB | Tesla V100-SXM2-16GB
=== Environment ===
platform : Linux-4.9.0-8-amd64-x86_64-with-debian-9.6
distro : #1 SMP Debian 4.9.130-2 (2018-10-27)
conda env : base
python : /opt/anaconda3/bin/python
sys.path :
/home/jupyter/fastai-course-v3/nbs/dl1
/opt/anaconda3/lib/python37.zip
/opt/anaconda3/lib/python3.7
/opt/anaconda3/lib/python3.7/lib-dynload
/opt/anaconda3/lib/python3.7/site-packages
/opt/anaconda3/lib/python3.7/site-packages/IPython/extensions
/home/jupyter/.ipython
Maybe…
There is about a one-month difference between my version and yours, so that is also worth checking…
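To rule that out quickly, you can print the exact versions in your notebook and compare them against mine above (just a quick check snippet, nothing fastai-specific):

```python
# Print the installed fastai and torch versions to compare against mine above.
import fastai, torch
print(fastai.__version__, torch.__version__)
```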
Jeremy mentioned that he was getting slower speeds on a V100 (cloud?) compared to a 1080Ti (local?): here
You can compare your PC or GCP performance with others here.
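For a rough apples-to-apples number, I just time a single epoch on the same dataset (a sketch following the v1 lesson-1 notebook; `data` is the hypothetical DataBunch from the snippet above, and the model/metric are just what that notebook uses):

```python
# Rough benchmark: time one fine-tuning epoch to compare hardware.
import time
from fastai.vision import create_cnn, models, error_rate

learn = create_cnn(data, models.resnet34, metrics=error_rate)

start = time.time()
learn.fit_one_cycle(1)   # one epoch is enough for a rough comparison
print(f'epoch wall time: {time.time() - start:.1f}s')
```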