I’m really enjoying the deep learning course.
I’ve set up my VM on Google Cloud, as they offer $300 of free credit.
I’m trying to go through the lessons in great detail, making notes, and really attempting to understand what is happening and what the fast.ai library is doing.
This is a slow process (for me!) and it means that I’m spending a large amount of time with the GPU idle.
Any recommendations on how to set everything up so I only use the GPU as needed when I’m actually training the models?
I thought about maybe setting up one CPU instance to study and a separate GPU instance to train, but my understanding is that fast.ai needs a GPU to run?
My local machine is a 2012 MacBook Pro; maybe I could set up fast.ai on it? But again, it doesn’t have an NVIDIA GPU.
I’ve had a look at Crestle and the GPU on/off feature seems great! But $300 of free credit from Google is hard to ignore.
Thanks for the help,
I’ve been developing locally on my Mac (an Anaconda Docker image, then installing the fastai CPU version) and then spinning up a cloud GPU instance when needed. I keep my code in a GitHub repo so it’s easy to migrate code up when I’m ready for the GPU.
Note I’ve been working almost entirely on structured data or NLP problems, which are easier to do on a CPU. But you should still get good enough performance on images if you’re just testing out how it works on a few of them.
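For what it’s worth, the round trip can be scripted in a few lines. Here’s a rough sketch assuming a gcloud-managed instance (the instance name and zone are made up; substitute your own):

```shell
# Push local work to GitHub, then start the GPU instance only when needed
git push origin master
gcloud compute instances start my-fastai-gpu --zone us-west1-b
gcloud compute ssh my-fastai-gpu --zone us-west1-b

# ...git pull and train on the instance, then stop it to save credit
gcloud compute instances stop my-fastai-gpu --zone us-west1-b
```

Stopping (not deleting) the instance keeps your disk around while the GPU stops billing.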
@Tchotchke can you please explain how you’re doing it (the steps to configure and run the CPU-only version)? I tried setting up the CPU environment, but the code still produces an error.
My GPU is causing problems (PyTorch doesn’t support my GPU anymore).
The following should be close - there may be a few other little things to do inside the Docker container, but the Dockerfile looks like:

```dockerfile
# Base image assumed here - I started from the Anaconda image mentioned above
FROM continuumio/anaconda3

RUN apt-get update
RUN apt-get install -y libgl1-mesa-glx
RUN git clone https://github.com/fastai/fastai.git
# Run conda from inside the repo so it can find the environment file;
# for a CPU-only install you may need to point it at the repo's CPU environment file
WORKDIR fastai
RUN conda env update
# Note - I don't think this actually activates the environment
SHELL ["/bin/bash", "-c", "source activate fastai"]
# This will enable images and plots to show up in the notebook
RUN jupyter nbextension install --sys-prefix --py widgetsnbextension
RUN jupyter nbextension enable --py --sys-prefix widgetsnbextension
```
When you `docker run`, you’ll need to expose port 8888. After entering the container you’ll need to `source activate fastai`. To run the Jupyter notebook, you’ll want `jupyter notebook --ip 0.0.0.0 --allow-root`. Also, I noticed that in the Jupyter notebooks you need to `import torch` before anything else.
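Putting those steps together, the build-and-run sequence looks roughly like this (the image tag is arbitrary; adjust ports and paths to your setup):

```shell
# Build the image from the Dockerfile above
docker build -t fastai-cpu .

# Run it interactively, exposing Jupyter's default port 8888
docker run -it -p 8888:8888 fastai-cpu /bin/bash

# Inside the container: activate the environment, then start the notebook
source activate fastai
jupyter notebook --ip 0.0.0.0 --allow-root
```

Then open the URL that Jupyter prints (swap the container hostname for localhost).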
There may be one or two other tricks - eventually I’m planning on writing this up a little more clearly, but I haven’t yet had the time.
I haven’t used it for fast.ai, but I guess you can try Google Colab: Colaboratory and Fastai
It provides a free GPU for up to 12 hours at a time. You can experiment/code on Google Colab and move the code to a VM on Google Cloud once you need to run it for longer.
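Whichever environment you end up in (Colab, a cloud VM, or your Mac), it’s worth confirming a GPU is actually visible before kicking off training. A small stdlib-only sketch that probes for `nvidia-smi` (the function name is just illustrative):

```python
import shutil
import subprocess

def gpu_available():
    """Return True if an NVIDIA GPU is visible to the system via nvidia-smi."""
    # nvidia-smi ships with the NVIDIA driver; if it's absent, there's no usable GPU
    if shutil.which("nvidia-smi") is None:
        return False
    # nvidia-smi exits non-zero when it can't talk to a GPU
    return subprocess.run(["nvidia-smi"], capture_output=True).returncode == 0

print(gpu_available())
```

On a CPU-only Mac this prints False; on a properly configured GPU instance it should print True.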
Wow! I hadn’t come across Google Colab. Free GPU for 12 hours at a time? That’s great! Will give it a shot.
Thanks everyone for your help!