We’d like to introduce another option for getting DL1/2 running on a GPU. Spell is a command line tool for super simple remote execution - like the bash & operator, but for sending work off to a cloud instance. It’s great for developing DL code locally while seamlessly running it on a cloud GPU.
We spent some time getting the fast.ai notebooks to work easily with Spell. To run the notebooks, sign up for a Spell account and install the CLI:
$ pip install spell
$ spell login
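If you want a quick sanity check before pulling the notebooks, you can round-trip a small command through Spell first (just an illustration - any trivial command works, and the machine type defaults to a CPU instance):

$ spell run "echo hello from spell"

The output streams back to your terminal when the run finishes.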
Then grab the course materials and start Jupyter from that directory like this:
$ git clone https://github.com/fastai/fastai.git && cd fastai
$ spell jupyter --machine-type K80 \
--conda-env fastai \
--mount public/tutorial/fast.ai:data
Jupyter will be running locally, but wired to a remote kernel with a K80 and the fastai conda environment loaded. Once Jupyter starts, switch to the “Spell - fastai on K80” kernel by clicking “Change kernel” under the Kernel menu (you only have to do this once).
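To confirm the kernel really is remote and can see both the GPU and the mounted data, a cell like this should do it (a minimal sketch - it assumes the fastai environment’s PyTorch install and that the mount from the command above shows up under data/ in the notebook’s working directory):

import os
import torch

# should print True and something like "Tesla K80"
print(torch.cuda.is_available())
print(torch.cuda.get_device_name(0))

# the --mount flag above makes the course data available here
# (adjust the path if your setup differs)
print(os.listdir('data'))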
Questions/feedback welcome - we’ll be hanging out here and in Slack.
I should also mention that we’re giving out $300 in free credits, which ought to be enough to get through the course without paying anything.