To use Google Colab with fastai v1, first enable a GPU via Runtime → Change runtime type → Hardware accelerator → GPU.
Then run the following code in a Colab cell to install up-to-date versions of PyTorch and fastai:
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
# Build the wheel platform tag (e.g. cp36-cp36m) for this Python
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
# Detect the installed CUDA runtime version (e.g. cu92); fall back to CPU if no GPU device exists
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install torch_nightly -f https://download.pytorch.org/whl/nightly/{accelerator}/torch_nightly.html
# Sanity-check the install
import torch
print(torch.__version__)
print(torch.cuda.is_available())
print(torch.backends.cudnn.enabled)
!pip install fastai
import fastai
from fastai import *
from fastai.vision import *
This works fine for me with MNIST, assuming you got a full-GPU-RAM instance. But it doesn’t work with Dogs and Cats. Does anyone know of a way to get that working?
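For reference, here’s roughly what I mean by the MNIST case — a minimal sketch using the small MNIST sample that ships with fastai v1 (dataset choice and resnet18 are my own; adjust as needed):

```python
from fastai import *
from fastai.vision import *

# MNIST_SAMPLE is a small 3-vs-7 subset bundled with fastai
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path, size=28)
learn = ConvLearner(data, models.resnet18, metrics=accuracy)
learn.fit_one_cycle(1)
```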
# Dogs and Cats example code
path = untar_data(URLs.DOGS)
print(path)
data = ImageDataBunch.from_folder(
    path,
    ds_tfms=get_transforms(),
    size=224,
)
data.normalize(imagenet_stats)  # normalize with ImageNet stats
img, label = data.valid_ds[-1]
img.show(title=str(label))
learn = ConvLearner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(1)
Error:
RuntimeError: DataLoader worker (pid 216) is killed by signal: Bus error.
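For what it’s worth, a DataLoader “Bus error” usually points at worker processes exhausting the container’s shared memory (`/dev/shm`), rather than general or GPU RAM, so it may be worth checking how much shared memory the Colab container actually grants:

```shell
# Show the size and usage of the shared-memory mount that
# DataLoader worker processes use to pass batches around
df -h /dev/shm
```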
Checking RAM with:
!pip install gputil psutil humanize  # none of these ship with Colab
import psutil
import humanize
import os
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab, and it isn't guaranteed
gpu = GPUs[0]

def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))

printm()
yields:
Gen RAM Free: 11.8 GB | Proc size: 1.9 GB
GPU RAM Free: 10339MB | Used: 1102MB | Util 10% | Total 11441MB
So I’m getting the full Colab GPU RAM allotment.
Stack Overflow and the PyTorch forums don’t directly address the issue, but it seems to be a memory-allocation problem in the DataLoader workers. Setting num_workers = 0 might help, but I don’t see where to pass it. Anyone have any ideas?
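One possibility, untested: if ImageDataBunch.from_folder forwards extra keyword arguments down to DataBunch.create (which accepts num_workers), then single-process loading could be switched on right where the data is built — a sketch under that assumption:

```python
# Sketch, assuming from_folder passes num_workers through to DataBunch.create
data = ImageDataBunch.from_folder(
    path,
    ds_tfms=get_transforms(),
    size=224,
    num_workers=0,  # no worker subprocesses, so no shared-memory traffic
)
data.normalize(imagenet_stats)
```

If that kwarg doesn’t flow through in the installed version, the same setting should still be reachable on the underlying DataLoaders.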