I really like Colaboratory: it's free, provides a GPU (from what I have read), and uses a Jupyter-notebook-like environment, so I wanted to see if anyone else is using it for fastai. I have managed to load the required dependencies and wanted to start a thread we can use for troubleshooting.
@ecdrid that would be great! I'm having trouble understanding how files and directories work here; for example, I uploaded labels.csv but don't know exactly how to access it in the notebook.
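One approach that seems to work for this (a minimal sketch, assuming the file was uploaded with Colab's built-in upload widget; `labels.csv` here is just an example filename):

```python
# Upload a file through Colab's file picker and read it with pandas.
# files.upload() returns a dict mapping each uploaded filename to its raw bytes.
import io
import pandas as pd
from google.colab import files

uploaded = files.upload()                                 # opens a browser file picker
labels = pd.read_csv(io.BytesIO(uploaded['labels.csv']))  # read the uploaded bytes
labels.head()
```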
How does sharing work with the 12 hours of free GPU time? If I create a notebook and share it with someone, can they get a copy and run it independently of me? I want to teach some people, but they need to be able to use it on their own.
Hi, please see my post on the same topic. I am facing issues while training the full network with SGDR and differential learning rates. I am still exploring and will update the post accordingly.
Hey everyone! I hope you are enjoying Google Colab as much as I am. I would like to share some tips to make Google Colab easier to use day to day. Here are some convenient ways to upload a folder of files, such as a large data set, from your local computer.
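For example, one convenient route (a sketch; the archive name and Drive folder below are just placeholders) is to zip the folder locally and extract it on the Colab VM, or to pull it in from Google Drive:

```python
# Option 1: zip the folder on your machine, upload the archive, and unzip it on the VM.
from google.colab import files
uploaded = files.upload()            # pick e.g. dataset.zip in the file dialog
!unzip -q dataset.zip -d data/       # extract into a data/ directory on the VM

# Option 2: mount Google Drive and copy the folder from there.
from google.colab import drive
drive.mount('/content/drive')
!cp -r "/content/drive/My Drive/dataset" data/
```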
I’m trying to get started on the first lesson of the course, and I’m having issues with the Google Colaboratory environment. I’ve gotten the necessary packages installed, but when I try to run the pre-trained model, I get this error:
RuntimeError: cuda runtime error (2) : out of memory at /pytorch/torch/lib/THC/generic/THCStorage.cu:58
I’ve tried reinstalling packages, updating/downgrading things, and restarting the VM. I also tried `rm -rf {PATH}tmp`, which didn’t help. I’ll link my notebook below in case anyone wants to have a look; thanks for any help!
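In case it helps anyone debugging the same thing, here is a quick way to check whether the GPU runtime is attached and how much memory is actually in use (a sketch, not a fix):

```python
# Confirm the notebook sees a GPU, then inspect its memory usage.
import torch
print(torch.cuda.is_available())   # True only if a GPU runtime is attached

# nvidia-smi lists total vs. used GPU memory and any processes holding it.
!nvidia-smi
```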
To test running on the GPU, I ran the code from PyTorch’s beginner tutorial, which has you perform matrix addition on the GPU. I still got the out-of-memory error for those commands; the error above is the output from the notebook where I ran that tutorial, so memory should not have been an issue, but it was.
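For reference, the tutorial snippet in question is roughly this (a sketch of the `.cuda()`-style code from PyTorch’s beginner tutorial):

```python
# Matrix addition on the GPU, as in PyTorch's beginner tutorial.
import torch

x = torch.rand(5, 3)
y = torch.rand(5, 3)
if torch.cuda.is_available():
    x = x.cuda()       # move the tensors to the GPU
    y = y.cuda()
    print(x + y)       # the addition runs on the GPU
```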
UPDATE 1/29/18 @ 5:25 PM
Solved the issue. On Google Colab, when you accelerate your notebook with a GPU, every bit of code you run in a cell is automatically compiled and sent to the GPU. Calling functions such as `.cuda()` in PyTorch, which compile specific items for the GPU, leads to an error because you are essentially trying to compile something twice, which doesn’t work. I had to fork the fastai GitHub repository and edit the convnet file so that the `to_gpu()` call is removed, and that fixed the issue. I anticipate needing to make similar modifications to each lesson’s code as I go through them, and I intend to share all of the code I change at the end.
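For anyone who prefers not to delete the call outright, a guarded version is one alternative (a sketch; the function name mirrors fastai’s `to_gpu()` but this is illustrative, not the exact library source):

```python
# Only move a tensor/module to the GPU when CUDA is actually usable,
# so the same code runs unchanged on CPU-only or GPU runtimes.
import torch

def to_gpu(x):
    return x.cuda() if torch.cuda.is_available() else x
```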