If the first step of fine_tune completes, does that mean I have enough memory?

If I can do one forward pass and eval, should my GPU be able to handle the next one?

from fastai.text.all import *

path = untar_data(URLs.IMDB)
dls = TextDataLoaders.from_folder(path, valid='test')
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy, bs=16)
learn.fine_tune(4, 1e-2)

The switch you see there happens after the first eval, so the first step is done.

Also, I have changed the bs and see no change in memory usage. Should I expect one?

Obviously I get a RuntimeError: CUDA out of memory. Tried to allocate 102.00 MiB (GPU 0; 7.79 GiB total capacity; 6.44 GiB already allocated; 86.12 MiB free; 6.61 GiB reserved in total by PyTorch)

Try reducing your batch size, or try Google Colab, which gives you a much bigger GPU.

I think I changed the bs there, but it doesn't seem to change the CUDA memory usage… maybe I'm passing it in the wrong place… or I should try bs=8 hehe…

You have to pass the bs argument to the dataloaders. I think that's what you're missing. :slight_smile:
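In the snippet above, `bs=16` goes to `text_classifier_learner`, which doesn't build the batches, so it has no effect on memory. `TextDataLoaders.from_folder` is where the batch size belongs, e.g. `TextDataLoaders.from_folder(path, valid='test', bs=16)`. Here is a minimal pure-Python sketch (hypothetical functions, not fastai's real code) of why a misplaced keyword can be silently swallowed instead of raising an error:

```python
# Hypothetical sketch: the batch size only takes effect where
# the batches are actually created (the dataloaders), and a
# constructor that absorbs extra keywords via **kwargs will
# silently ignore a misplaced bs instead of complaining.

def make_dataloaders(path, bs=64):
    """Batch size takes effect here, when batches are built."""
    return {"path": path, "bs": bs}

def make_learner(dls, **kwargs):
    """Extra kwargs, like a misplaced bs, are swallowed silently."""
    return {"dls": dls, "ignored": kwargs}

# Misplaced: bs never reaches the dataloaders, so memory use is unchanged.
wrong = make_learner(make_dataloaders("imdb"), bs=16)
assert wrong["dls"]["bs"] == 64          # still the default
assert wrong["ignored"] == {"bs": 16}    # quietly absorbed

# Correct: pass bs where the batches are created.
right = make_learner(make_dataloaders("imdb", bs=16))
assert right["dls"]["bs"] == 16
```

This is only an illustration of the kwargs-swallowing pattern; the fix in the actual code is simply to move `bs=16` into the `TextDataLoaders.from_folder` call.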


You might also want to restart your kernel after getting that error.