Kaggle language_model_learner CUDA out of memory

I’ve tried reducing bs on TextClasDataBunch.from_csv, as well as bptt and emb_sz on language_model_learner, without success. Does anyone know what I might try next?
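For reference, this is roughly what I’m running — a sketch, assuming the older fastai v1 signature where bptt and emb_sz are passed directly to language_model_learner; the path, 'train.csv', and the exact values are illustrative, not the kernel’s real ones:

```python
from fastai.text import *  # fastai v1

path = Path('../input')  # Kaggle input dir (illustrative)

# The knobs I've been lowering, with example values:
data_lm = TextLMDataBunch.from_csv(path, 'train.csv', bs=16)     # was bs=32
data_clas = TextClasDataBunch.from_csv(path, 'train.csv', bs=16,
                                       vocab=data_lm.train_ds.vocab)
learn = language_model_learner(data_lm, bptt=35, emb_sz=200,     # was bptt=70, emb_sz=400
                               pretrained_model=URLs.WT103,
                               drop_mult=0.3)
```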

https://www.kaggle.com/dromosys/quora-insincere-questions-fast-ai-text-v2

CUDA out of memory. Tried to allocate 1.27 GiB (GPU 0; 11.17 GiB total capacity; 6.62 GiB already allocated; 132.25 MiB free; 4.09 GiB cached)

Thanks

A few things to check:

- This is probably caused by heavy GPU memory allocation on Google Cloud, so the same code may work if you try again later.
- If you are running interactively, restart the kernel before "Run All" so that all possible memory is freed.
- Try a smaller bptt.
- Note that fastai assumes the labels are in the first column and the text in the second unless you specify otherwise.
- I’m not sure the WT103 pretrained weights are allowed if you’re participating in the competition.
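A minimal sketch of those suggestions in code (the column names question_text/target are from the Quora CSV, and the bs/bptt values are just examples):

```python
import gc
import torch
from fastai.text import *  # fastai v1

# Free the previous learner and let PyTorch release its cached blocks
del learn
gc.collect()
torch.cuda.empty_cache()

# Be explicit about which columns hold the text and the labels,
# rather than relying on the label-first / text-second default
data_clas = TextClasDataBunch.from_csv(path, 'train.csv', bs=16,
                                       text_cols='question_text',
                                       label_cols='target')

# A smaller bptt shortens the truncated-backprop window, which cuts
# activation memory roughly in proportion
learn = language_model_learner(data_lm, bptt=35,
                               pretrained_model=URLs.WT103)
```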

Thanks, I got past the memory issue.

How? I’m facing the same issue.