ULMFiT memory issues

I’m running the IMDB notebook on a GTX 1070 (8 GB memory).

Changing the batch size from 52 to 30 was enough to get me all the way to the very final .fit call without running out of memory.

This is the final call:

learn.fit(lrs, 1, wds=wd, cycle_len=14, use_clr=(32,10))

What can I do to reduce the memory usage there?

Try adjusting BPTT in addition to batch size.
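To see why both knobs matter, here’s a rough back-of-envelope sketch of how batch size and BPTT together drive activation memory in an AWD-LSTM-style language model. The hidden size, layer count, and byte width below are hypothetical placeholders, not the notebook’s actual config — the point is just that memory scales with the product bs × bptt:

```python
# Back-of-envelope estimate of activation memory held for
# backprop-through-time. All model sizes here are placeholder
# assumptions, not the real notebook configuration.

def activation_bytes(bs, bptt, hidden=1150, layers=3, bytes_per_float=4):
    """Approximate bytes of activations kept for the backward pass."""
    # Each timestep of each layer keeps a (bs x hidden) activation tensor.
    return bs * bptt * hidden * layers * bytes_per_float

baseline = activation_bytes(bs=52, bptt=70)
reduced = activation_bytes(bs=30, bptt=70)
print(f"bs=52: ~{baseline / 1e6:.0f} MB, bs=30: ~{reduced / 1e6:.0f} MB")

# Halving bptt shrinks the footprint exactly like halving bs:
assert activation_bytes(30, 35) == activation_bytes(15, 70)
```

This ignores the optimizer state, gradients, and the model weights themselves (which dominate on small batches), so treat it only as a way to reason about the bs/bptt trade-off, not as a real memory profile.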

Thanks, that worked eventually.

Does cycle_len affect it too?

BPTT determines how many timesteps of activations are held for backpropagation through time, so memory has to be allocated on the GPU to hold that data. cycle_len just changes the learning rate schedule, so memory should be unaffected.
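A quick way to see that cycle_len is memory-neutral: a cyclical schedule only produces a sequence of scalar learning rates, one per iteration, never extra GPU tensors. The sketch below is a generic triangular schedule for illustration only — it is not fastai’s actual use_clr implementation, and the ratio/warm-up parameters are assumptions:

```python
# Minimal triangular (CLR-style) learning rate schedule.
# Illustrative only -- NOT fastai's use_clr implementation.

def clr_schedule(lr_max, n_iters, ratio=32, pct_ramp=0.1):
    """Ramp up for pct_ramp of training, then anneal back down."""
    lr_min = lr_max / ratio
    peak = max(1, int(n_iters * pct_ramp))
    lrs = []
    for i in range(n_iters):
        if i < peak:
            frac = i / peak                                  # warm-up
        else:
            frac = 1 - (i - peak) / max(1, n_iters - peak)   # anneal
        lrs.append(lr_min + (lr_max - lr_min) * frac)
    return lrs

lrs = clr_schedule(lr_max=0.01, n_iters=100)
# A longer cycle just yields more scalars; nothing grows on the GPU.
assert len(clr_schedule(0.01, 1400)) == 1400
assert max(lrs) <= 0.01
```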

I’m stuck at this very same line, also getting an out-of-memory error. I’ve already set batch size to 8 and bptt to 20, and I’m still running out of memory. I’m on a GTX 1070.

What are the exact values you used to train the classifier? Anything else I could be doing besides changing bs and bptt?

I successfully trained on a 1070 using bptt=70, bs=30.

You’re talking about running train_clas.py script, right?

This script has 3 calls to learn.fit (not sure why; I’ll watch the video again). The first two calls, in that if (startat…) line, run for only one epoch each and go fine, but by the time it reaches the last call I get the out-of-memory error. I wonder if it’s possible to empty the GPU memory after the first two calls and before the third.

Does this make sense?

No, I’m talking about the IMDB notebook linked from my post.

The memory doesn’t need clearing out.

What worked for me was this suggestion from Jeremy: