Thank you. I am also using Colab instead of Amazon AWS and Paperspace. I could not locate the notebook for Part 1 version 2. Can anyone link me up?
We need to upload them?
Or find them with the fast.ai directory?
If we use Paperspace, the notebooks get downloaded while setting up the environment.
I searched for the notebooks of part 1 version 2 but couldn't find them.
The Jupyter notebooks for the 2018 version (part 1 v2) of the course are in the fastai GitHub repo. Here's the direct link to get all the notebooks: https://github.com/fastai/fastai/tree/master/courses/dl1
Yes. That's because, during the setup process, one of the commands in the Paperspace script runs git clone on the fastai GitHub repo, where all the notebooks are.
Ok… Thank you @cedric for your help
Thank you so much!
I followed your notebook. When I got to "!mkdir data && wget http://files.fast.ai/data/dogscats.zip && unzip dogscats.zip -d data/", the cell runs successfully. However, the image folder does not appear in my folder.
I tried other commands; they all report successful downloads inside my Colab notebook, but the files don't show up in the folder.
When I tried the command above again, it says, "mkdir: cannot create directory 'data': File exists." And I couldn't find it anywhere in Google Drive.
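In case it helps anyone hitting the "File exists" error: the `&&` chain stops there because plain `mkdir` refuses to create a directory that already exists, so nothing after it runs. A small sketch (assuming the same `data` directory name) using `mkdir -p`, which succeeds either way:

```shell
# Plain 'mkdir data' fails with "File exists" on a re-run;
# 'mkdir -p' is a no-op when the directory is already there.
mkdir -p data
mkdir -p data   # running it a second time still succeeds
ls -d data      # confirm the directory exists
```

With `-p` in the original one-liner, the wget and unzip steps after `&&` would still run on a re-run instead of being skipped.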
I tried creating new folders and starting new Colab notebooks, but wget doesn't work.
This doesn't work either:
"!wget https://raw.githubusercontent.com/vincentarelbundock/Rdatasets/master/csv/datasets/Titanic.csv -P drive/app"
Hi,
Can you explain how you did it exactly? I'm getting the same error:
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58
thank you
Hi,
They are not stored in your Google Drive. They are just temporary files on that particular instance.
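To make that concrete, here is a minimal Python sketch (standard library only; the `data` path is just an example) showing where a `!wget` download actually lands:

```python
import os

# A download started with !wget lands in the notebook's working
# directory on the Colab VM's local disk, not in Google Drive:
print("working directory:", os.getcwd())

# 'data' is just an example path; anything under it lives on the
# VM's ephemeral disk and disappears when the instance is recycled.
print("data/ exists on this instance:", os.path.exists("data"))
```

Google Drive only sees the files if you mount it in the notebook (a Colab-only step) and copy them under the mount point.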
Hi Dinesh,
So what I did to fix the issue was to fork the fastai GitHub repository. In the 'conv_learner.py' file, on lines 48 and 50, there was a call to the 'to_gpu()' function; I simply deleted those calls, then cloned my edited version of the library on Colab and it worked. You can clone my altered version of fastai here:
I'd recommend you also check whether your GPU is actually out of memory, because that is a completely different issue. Also, it's worth noting that since my original post I have not had that issue again at all on Google Colab. I'm not sure why, but I've been able to run everything that was throwing the error previously without any problems. I hope all of this helps.
Feel free to ask any other questions!
Best,
Jacob
Hi Jacob,
Thanks for the reply. I need one more thing from you.
How did you link your edited version onto colab notebook?
Hi Dinesh,
Happy to help. First, ensure that you do not have the main version of fastai installed: if you installed using pip, '!pip uninstall fastai' should work; otherwise try '!rm -rf fastai'. I linked my edited version into the Colab notebook using '!git clone <paste link here>'. Hopefully this should work for you too. If you run into an issue using the edited version of fastai, let me know; I'll do what I can to fix it, but of course you may fork your own version on GitHub and work on it yourself if you like!
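For anyone following along, the swap Jacob describes can be sketched as a few shell lines (the fork URL is a placeholder you must substitute; in a Colab cell each line would get a leading `!`):

```shell
# Remove the stock fastai so it doesn't shadow the fork:
pip uninstall -y fastai || true   # harmless if fastai isn't installed via pip
rm -rf fastai                     # remove a previously cloned copy, if any

# Then clone the edited fork in its place (placeholder URL -- use your own fork):
# git clone https://github.com/<your-username>/fastai.git
```

After the clone, the `fastai` directory sits next to the notebook, so `import fastai` picks up the edited copy.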
Strange! I ran the notebook as is (at least lesson-1.ipynb) without the to_gpu edits. It seems to work just fine. I wonder what the difference is.
Colab GPU is almost twice as slow as using the CPU on my laptop. Has anyone else noticed this?
It's definitely been weird; I've had it work without the to_gpu edits as well, but as you've seen I also had to fix it once before. Since I've done that fix, I haven't had an issue. Here's hoping that issue doesn't come up again!
I am getting the error
AttributeError: module 'PIL.Image' has no attribute 'register_extensions'
for the cell
img = plt.imread(f'{PATH}valid/cats/{files[0]}')
plt.imshow(img);
when I run the lesson 1 notebook on Google Colab. I am following this guide
Has anyone got the same error, or whizzed past this?
Yes, I had the same error yesterday for a while, but then, as mentioned in the forum, I commented out:
#%reload_ext autoreload
#%autoreload 2
%matplotlib inline
and that worked fine.
But today it's not working even after commenting out those 2 lines.
I too am using Google Colab.
I had an issue with PIL at one point and was able to fix it by installing version 4.0.0 (I think) of Pillow. Not sure if yours is the same issue I had, but downgrading Pillow may be worth a try since it is a one-line bash command; I forget the exact command, but a quick Google search will show it. Pillow contains PIL and, I think, some other image library, so that is why it is the package to get an older version of.
Yes, my issue is resolved now by downgrading Pillow to 4.0.0.