Any questions related to the free Google Colab service can be posted here.
- Tutorial to get started.
- Repo tailored for Colab (credit to @pooya_drv).
- Reference to the course announcement on this.
Creating your own Jupyter session from Colab for fastai version 1
Big Warning: This is experimental, not officially recommended, and could be patched out by Google at a later date. Please use it at your own discretion.
- Create an ngrok account and generate an authtoken here.
- Add the following two cells into your Colab notebook of choice:
import subprocess

def setup_colab(tok):
    # Download and run the first setup script
    subprocess.call(['wget', 'http://tiny.cc/80jwlz', '-O', 'bash.sh'])
    subprocess.call(['sh', 'bash.sh'])
    # Authenticate ngrok, then tunnel port 8888 in the background (logs go to ngrok.log)
    get_ipython().system_raw(f'./ngrok authtoken {tok} && ./ngrok http --log=stdout 8888 > ngrok.log &')
    # Download and run the second setup script
    subprocess.call(['wget', 'http://tiny.cc/qrjwlz', '-O', 'bash2.sh'])
    subprocess.call(['sh', 'bash2.sh'])
def end_session():
    # Download and run the teardown script to shut the server down
    subprocess.call(['wget', 'https://tinyurl.com/wxhs52a', '-O', 'bash3.sh'])
    subprocess.call(['sh', 'bash3.sh'])
- To spin up your server, run setup_colab and pass in your ngrok authtoken (see the usage sketch below).
- Navigate to the “Status” page here and your server should show up. If two do, choose the first.
You now have a working native Jupyter environment (so widgets work, etc.) running off of Colab. (It may take a few minutes before the server shows up; it depends on how long the cell takes to run.)
MAKE SURE to end the session when you are done by running end_session() so the server does not run forever!
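For reference, a minimal usage sketch; the token string below is a hypothetical placeholder, so paste in your real authtoken from the ngrok dashboard:

# First cell: start the ngrok tunnel and the Jupyter server
setup_colab('YOUR_NGROK_AUTHTOKEN')  # hypothetical placeholder token

# ... work in the native Jupyter session from the ngrok "Status" page ...

# Final cell: shut the server down when finished
end_session()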
Using PyTorch 1.6 in Colab [1]
To use PyTorch 1.6 in Colab (e.g. when installing fastai), you need to do the following:
!wget https://download.pytorch.org/whl/cu101/torch-1.6.0%2Bcu101-cp36-cp36m-linux_x86_64.whl -O torch-1.6.0+cu101-cp36-cp36m-linux_x86_64.whl
!pip install torch-1.6.0+cu101-cp36-cp36m-linux_x86_64.whl
(then of course pip install fastai2, etc.)
If you’re running CUDA 10.2, then you will need to replace the whl link and filename with https://download.pytorch.org/whl/cu102/torch-1.6.0-cp36-cp36m-linux_x86_64.whl and torch-1.6.0-cp36-cp36m-linux_x86_64.whl, respectively.
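Either way, it is worth verifying that the install took effect. A minimal sanity check using standard PyTorch attributes (assumes a GPU runtime is attached):

import torch

# Confirm the expected version and CUDA build are active
print(torch.__version__)          # e.g. 1.6.0+cu101
print(torch.version.cuda)         # e.g. 10.1
print(torch.cuda.is_available())  # True on a GPU runtime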
FAQ
- How can I resolve the issue “Tesla T4 with CUDA capability sm_75 is not compatible with the current PyTorch installation”?
/usr/local/lib/python3.6/dist-packages/torch/cuda/__init__.py:125: UserWarning:
Tesla T4 with CUDA capability sm_75 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the Tesla T4 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
Using --no-cache-dir as an additional argument fixes the issue, as described here (otherwise pip may reuse a cached wheel built against the wrong CUDA version).
Example:
pip install --no-cache-dir torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
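To see what your runtime’s GPU actually reports, this minimal sketch uses standard PyTorch calls (run it after the reinstall above):

import torch

# Print the GPU name and compute capability, e.g. ('Tesla T4', (7, 5)) -> sm_75
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
    print(torch.cuda.get_device_capability(0))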
References
Note that this is a forum wiki thread, so you all can edit this post to add/change/organize info to help make it better! To edit, click on the little pencil icon at the bottom of this post.