CUDA Out of memory (GPU) issue while lr_find

Hi,

I am getting an out-of-memory (GPU) error while running lr_find with batch size 2.
I am running lesson_3-planet.ipynb and trying to fine-tune the model on 256-pixel images.
The first two stages (128-pixel images, batch size = 64) ran fine.

Is there a way to free up memory here? I am on Paperspace Gradient (P4000).

RuntimeError: CUDA out of memory. Tried to allocate 9.00 MiB (GPU 0; 7.93 GiB total capacity; 6.85 GiB already allocated; 6.56 MiB free; 21.28 MiB cached)

Update: I restarted the kernel and the memory was freed. Is there another way to do it?

This may help: https://docs.fast.ai/dev/gpu.html#reusing-gpu-ram
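A minimal sketch of what that page describes, assuming PyTorch (which fastai uses under the hood); the `free_gpu_memory` helper name is my own. Note that `torch.cuda.empty_cache()` only releases PyTorch's cached-but-unused blocks, so you still need to drop references to the big objects (the `Learner`, intermediate tensors) first:

```python
import gc

def free_gpu_memory():
    """Run the Python garbage collector, then ask PyTorch to release
    its cached (but unused) GPU blocks back to the driver."""
    collected = gc.collect()          # reclaim unreferenced Python objects
    try:
        import torch                  # guard: torch may not be installed
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # frees cached blocks, not live tensors
    except ImportError:
        pass
    return collected

# In the notebook, drop references to the large objects first, e.g.:
# del learn            # hypothetical name of your Learner
# free_gpu_memory()
```

This avoids a full kernel restart in many cases, though if a traceback from a CUDA error is still alive it can keep tensors referenced; running the cell twice (or clearing the exception) sometimes helps.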


I’ll try

Did you ever solve this? I'm also doing the lesson 3 notebook and getting the out-of-memory error even with batch size 1 when using lr_find.