Platform: Colab ✅

I’m having issues when loading the WT103 language model: specifically, when I run learn.lr_find() I get a CUDA error: out of memory. Is it possible to use a smaller language model? I believe this one has a vocabulary of a few million tokens!

```
RuntimeError                              Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/fastai/ in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     93         exception = e
---> 94         raise e
     95     finally: cb_handler.on_train_end(exception)

/usr/local/lib/python3.6/dist-packages/fastai/ in fit(epochs, model, loss_func, opt, data, callbacks, metrics)
     83                 xb, yb = cb_handler.on_batch_begin(xb, yb)
---> 84                 loss = loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     85                 if cb_handler.on_batch_end(loss): break

/usr/local/lib/python3.6/dist-packages/fastai/ in loss_batch(model, xb, yb, loss_func, opt, cb_handler)
     17     if not is_listy(yb): yb = [yb]
---> 18     out = model(*xb)
     19     out = cb_handler.on_loss_begin(out)

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/ in __call__(self, *input, **kwargs)
    478         else:
--> 479             result = self.forward(*input, **kwargs)
    480         for hook in self._forward_hooks.values():

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/ in forward(self, input)
     91         for module in self._modules.values():
---> 92             input = module(input)
     93         return input

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/ in __call__(self, *input, **kwargs)
    478         else:
--> 479             result = self.forward(*input, **kwargs)
    480         for hook in self._forward_hooks.values():

/usr/local/lib/python3.6/dist-packages/fastai/text/ in forward(self, input)
    148         output = self.output_dp(outputs[-1])
--> 149         decoded = self.decoder(output.view(output.size(0)*output.size(1), output.size(2)))
    150         return decoded, raw_outputs, outputs

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/ in __call__(self, *input, **kwargs)
    478         else:
--> 479             result = self.forward(*input, **kwargs)
    480         for hook in self._forward_hooks.values():

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/ in forward(self, input)
     62     def forward(self, input):
---> 63         return F.linear(input, self.weight, self.bias)

/usr/local/lib/python3.6/dist-packages/torch/nn/ in linear(input, weight, bias)
   1160         # fused op is marginally faster
-> 1161         return torch.addmm(bias, input, weight.t())

RuntimeError: CUDA error: out of memory
```
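For a sense of scale: the failing `F.linear` call materialises a `(seq_len * bs) × vocab` logits tensor all at once, which is why the OOM hits in the decoder. A back-of-the-envelope estimate with illustrative numbers (these are assumptions, not the exact fastai defaults):

```python
# Rough size of the decoder activation produced by the failing F.linear call.
# All numbers below are illustrative assumptions, not the exact defaults.
bs = 64         # batch size
bptt = 70       # sequence length per batch
vocab = 60_000  # approximate size of the WT103 model's capped vocabulary

elements = bs * bptt * vocab      # entries in the (seq_len*bs, vocab) logits
bytes_fp32 = elements * 4         # float32 storage, before gradients
print(f"{bytes_fp32 / 2**30:.2f} GiB for a single forward pass of the decoder")
```

Gradients roughly double that, so a modest reduction in batch size makes a big difference here.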

Have you tried reducing the batch size?

I thought of that option too, but I could not figure out how to set the batch size when working with a text DataBunch! Do you have any suggestions on how to set it?

Pass `.databunch(bs=BS)` instead of `.databunch()`, since you are using the data_block API.

My original dataset is a big JSON file where each line is itself a JSON object. I read this file into a dataframe, selecting only the two columns I need (input/label), then stored it back into a CSV file to load later with the data_block API. This is how I’m trying to load it (it has been more than two hours now and the cell has not finished):

```
data = (TextSplitData.from_csv(path, '/gdrive/My Drive/data/texts.csv', input_cols=1, label_cols=0)
        .tokenize()      # can specify custom arguments for tokenization here
        .numericalize()  # can specify custom arguments for numericalization here
        .databunch(TextDataBunch, bs=8192))
```

I tried with smaller values for bs (16, 32): same thing, dead slow, and I had to interrupt the cell. I thought the DataBunch did not load all the data but used lazy loading, so why is it taking so long?

No, it’s not lazy. Increase your batch size gradually, in powers of 2.
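The “powers of 2” suggestion above can be sketched as a simple schedule (the function name and limits here are made up for illustration):

```python
# Sketch of the suggested approach: start small and keep doubling the batch
# size until training no longer fits in GPU memory, then back off one step.
def batch_size_schedule(start=16, limit=512):
    bs = start
    while bs <= limit:
        yield bs
        bs *= 2

print(list(batch_size_schedule()))  # [16, 32, 64, 128, 256, 512]
```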


ipywidgets don’t seem to work with Colab. Does anyone know of a workaround for the ImageDeleter and the relabeler?


Hi everyone. I’m Ajay, a high school student from India. I’m new to the forums.

Quick question: is Colab compatible with PyTorch v1? I’m currently using the `!curl | bash` line in my Colab notebooks. Does this give me PyTorch v1?

Colab is compatible with the various versions of the fastai library. However, the curl command may or may not load the latest version. Check the version with `fastai.__version__` or something similar:

```
!{sys.executable} -m pip show fastai
```

Thanks, `pip show fastai` works. Looks like I’m running 1.0.27.
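If you’d rather grab that version string programmatically than eyeball the pip output, a small hypothetical helper (the `sample` text below is illustrative, not live pip output):

```python
# Hypothetical helper: pull the "Version:" field out of `pip show` output.
sample = """Name: fastai
Version: 1.0.27
"""

def parse_version(pip_show_output):
    for line in pip_show_output.splitlines():
        if line.startswith("Version:"):
            return line.split(":", 1)[1].strip()
    return None  # no Version field found

print(parse_version(sample))  # 1.0.27
```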

Hey @Descobar14 by any chance have you found any workaround to make this work in Colab?

hey guys, must I run this?

```
!curl | bash
```

You can just run these two lines of code:

```
!pip install torch_nightly -f
!pip install fastai
```

To check the version of fastai (just to ensure you have the latest version):

```
!pip show fastai
```

Hope this helps!


thanks :grin:

You should run it.
The script does more than just install Python packages; it also creates symlinks and expands memory partitions.

Yeah, thanks. I just downloaded the script to see what’s in it.

I don’t understand how to set the path for Tabular in Lesson 4. I’m using my own data and have the training file on my local computer and also in my Google Drive, so I am able to load it via pandas. But I run into problems when creating:

```
test = TabularList.from_df(df.iloc[800:1000].copy(), path=path, cat_names=cat_names, cont_names=cont_names)
```

which requires a path.

If you are going to use your data on Google Drive, first you need to mount it with this code:

```
from google.colab import drive

drive.mount('/content/gdrive', force_remount=True)
```

With `!pwd` you can confirm your current path.

With `%cd` you can change the path, like this:

```
%cd "/content/gdrive/My Drive"
```
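For reference, the `!pwd` / `%cd` magics map onto plain Python calls, which also work outside notebooks. A sketch, using a temporary directory as a stand-in for the Drive path:

```python
import os
import tempfile

start = os.getcwd()          # like !pwd: see where you are
target = tempfile.mkdtemp()  # stand-in for "/content/gdrive/My Drive"
os.chdir(target)             # like %cd: change the working directory
print(os.getcwd())           # confirm the change
os.chdir(start)              # change back when done
```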


I get this error when I run show_batch. I followed the steps for Colab; everything is up to date.

```
trn_tfms,_ = get_transforms(do_flip=True, flip_vert=True, max_rotate=30., max_zoom=1,
                            max_lighting=0.05, max_warp=0.)
data = (src.transform((trn_tfms, _), size=224)
```

```
File "/usr/local/lib/python3.6/dist-packages/torch/nn/", line 2092, in grid_sample
    raise ValueError("padding_mode needs to be 'zeros' or 'border', but got {}".format(padding_mode))
ValueError: padding_mode needs to be 'zeros' or 'border', but got reflection
```

Please help, I’m stuck here.
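For what it’s worth, this error usually means the installed PyTorch predates support for 'reflection' padding in grid_sample and so rejects the mode fastai passes; updating PyTorch is the likely fix. The check that raises it is essentially this (a simplified sketch, not the actual torch source):

```python
# Simplified sketch of the padding_mode validation behind the error above.
def check_padding_mode(padding_mode):
    if padding_mode not in ("zeros", "border"):
        raise ValueError(
            "padding_mode needs to be 'zeros' or 'border', "
            "but got {}".format(padding_mode))
    return padding_mode

try:
    check_padding_mode("reflection")  # what a newer fastai passes
except ValueError as e:
    print(e)
```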

Is there any way to directly download the pets dataset into Colab, rather than downloading it to the local system and then uploading it to Drive (and then mounting)?
Did I miss something?