I’m struggling with fastai 2.1.2 on Google Colab. This minimal example fails:
!pip install fastai --upgrade

from fastai.text.all import *

path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path, valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.fine_tune(4, 1e-2)
It gives this error message:
RuntimeError Traceback (most recent call last)
<ipython-input-9-c975fcd805ea> in <module>()
7 learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
----> 8 learn.fine_tune(4, 1e-2)
/usr/local/lib/python3.6/dist-packages/torch/tensor.py in __torch_function__(cls, func, types, args,
994 with _C.DisableTorchFunction():
--> 995 ret = func(*args, **kwargs)
996 return _convert(ret, cls)
RuntimeError: The size of tensor a (400) must match the size of tensor b (0) at non-singleton dimension 2
By the way, trying to dump the environment parameters also fails:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-10-c60446c1e005> in <module>()
1 from fastai.text.all import *
----> 2 import fastai.utils.collect_env
ModuleNotFoundError: No module named 'fastai.utils'
Any support is much appreciated.
My apologies, I seem to have found a solution; documenting it here in case anyone experiences the same. The problem seems to originate from the fastai and pytorch versions not matching. The following works better:
!pip install torch==1.6 --upgrade
!pip install "torchvision>=0.6.0" --upgrade
!pip install fastai==2.0.18 --upgrade
Does it mean that it is better not to upgrade fastai to the latest version (fastai 2.1.2, which works with pytorch 1.7) and to stay with a version of fastai (2.0.18, for example) that worked well with pytorch 1.6?
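For reference, the pairings reported in this thread can be kept in a small lookup so a notebook can fail fast before training. The table and the helper name below are mine, drawn only from the versions mentioned by posters here; this is a sketch, not an official compatibility matrix:

```python
# Known-good fastai <-> pytorch pairings reported in this thread.
# NOT an official compatibility matrix -- just what posters found to work.
KNOWN_GOOD = {
    "2.0.18": "1.6",
    "2.1.2": "1.7",
    "2.1.3": "1.7",
}

def torch_for_fastai(fastai_version):
    """Return the torch version reported to pair with this fastai release,
    or None if this thread has no data point for it."""
    return KNOWN_GOOD.get(fastai_version)
```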
I’m no expert, but I’ve not been able to make it work with pytorch 1.7 on Colab so far. I’m getting the error
“the nvidia driver on your system is too old (found version 10010)”. I then tried different combinations as described on https://pytorch.org/, e.g.
pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
… to use CUDA 11.0. So far no luck, and I’ve reverted to pytorch 1.6. I hope someone more experienced can show the way.
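For what it’s worth, the “found version 10010” in that error is the CUDA runtime version the driver supports, encoded the way CUDA’s own version macro is (major*1000 + minor*10), so 10010 means CUDA 10.1 — too old for a `+cu110` wheel built against CUDA 11.0. A small sketch of that decoding (helper names are mine):

```python
# The driver error reports the supported CUDA runtime as an integer
# encoded like CUDA's version macro: major*1000 + minor*10.
def decode_cuda_version(code):
    """Turn the reported integer into 'major.minor', e.g. 10010 -> '10.1'."""
    return f"{code // 1000}.{(code % 1000) // 10}"

def driver_supports(wheel_cuda, driver_code):
    """True if the driver's CUDA runtime is at least the wheel's build version."""
    have = tuple(int(x) for x in decode_cuda_version(driver_code).split("."))
    need = tuple(int(x) for x in wheel_cuda.split("."))
    return have >= need
```

So on an instance whose driver only supports CUDA 10.1, a `torch==1.7.0+cu110` wheel fails, while a CUDA 10.1 build should load.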
Thanks. I did the same as you, and I’m waiting for a consolidated version of fastai v2 that works with pytorch 1.7.
Just for completeness, documenting in case anyone experiences the same problem: thanks to Jeremy’s recent release of fastai 2.1.3, the problem seems to be resolved. I’m now able to use pytorch 1.7 on Colab:
!pip install torch torchvision --upgrade
!pip install fastai --upgrade
Here is Jeremy’s announcement on Twitter:
Hello! Did the upgrade to fastai 2.1.4 work okay for you? I had the same issues as you, but when I switched to fastai 2.1.4 with pytorch 1.7.0, I encountered this error during training:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2, 10]], which is output 0 of TanhBackward, is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
Any feedback would be appreciated!