I am training a model in Kaggle Kernels. I was able to successfully train the frozen model, but when I unfreeze the model and run lr_find, I get a CUDA out of memory error. From what I understand from this tutorial, reloading the model with learn.load() should "purge" the memory, freeing it up for later use. So I added that call, but I still get the CUDA out of memory error. I may not fully understand how the GPU memory is being used, so I might not be doing this correctly.
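For reference, this is roughly what my notebook does (a sketch, not my exact code; the architecture, the data object, and the checkpoint name are placeholders for my actual setup):

```python
from fastai.vision import *  # fastai v1 style import (assumption about version)

# `data` stands in for my ImageDataBunch; setup omitted here
learn = cnn_learner(data, models.resnet34, metrics=accuracy)

# Frozen training completes without any memory problems
learn.fit_one_cycle(4)
learn.save('stage-1')

# Reload the saved model, which I understood should purge GPU memory
learn.load('stage-1')

# Unfreeze and search for a learning rate -- this is where CUDA OOM happens
learn.unfreeze()
learn.lr_find()
```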
How can I resolve this problem?