RuntimeError: CUDA out of memory but terminal suggests I still have memory available

When trying to execute another command in JN (Jupyter Notebook), I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 49.00 MiB (GPU 0; 7.93 GiB total capacity; 7.39 GiB already allocated; 2.56 MiB free; 15.16 MiB cached)

but when I check in my terminal I can still see a lot of memory available on my GPU:

              total        used        free      shared  buff/cache   available
Mem:          30150        2549       22805          19        4795       27334
Swap:             0           0           0

Can anyone please shed some light on what the issue here is?


As far as I can understand, it’s a limitation of Jupyter/pytorch (which tends not to free GPU memory when something goes wrong, e.g. an exception partway through training).
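
Also note that the terminal output above looks like it comes from free, which reports system RAM, not GPU memory; the GPU has its own, separate 8 GiB pool. A rough way to see what the GPU itself is holding, from inside the notebook (nvidia-smi in a terminal shows the same picture):

    import torch

    # Memory taken on GPU 0 by this process:
    # - memory_allocated: memory occupied by live tensors
    # - memory_cached: memory held by PyTorch's caching allocator
    #   (renamed memory_reserved in newer PyTorch releases)
    print(f"allocated: {torch.cuda.memory_allocated(0) / 1024**2:.1f} MiB")
    print(f"cached:    {torch.cuda.memory_cached(0) / 1024**2:.1f} MiB")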

My solutions so far:

  1. Make sure that you have pytorch >= 1.0
  2. Update fastai
  3. Restart the Kernel
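
If restarting isn’t convenient, something that sometimes helps is dropping dead Python references and asking PyTorch to release its cached blocks (a sketch: it can only give back memory that is no longer referenced, so it won’t help if large tensors are still alive):

    import gc
    import torch

    # Collect unreachable Python objects so the tensors they held are freed,
    # then ask PyTorch's caching allocator to return unused blocks to the driver.
    gc.collect()
    torch.cuda.empty_cache()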

I’ve tried all three solutions, along with decreasing the batch size, and unfortunately the error persists.
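
(For anyone else reading: in fastai v1 the batch size is usually set when building the DataBunch, roughly like this; the path and numbers below are placeholders.)

    from fastai.vision import ImageDataBunch, get_transforms

    # Smaller bs means less GPU memory per training step.
    data = ImageDataBunch.from_folder(
        'path/to/data',            # placeholder path
        ds_tfms=get_transforms(),
        size=224,
        bs=16,                     # e.g. down from the default 64
    )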

Don’t know what to suggest then. Be aware that Jupyter sometimes gets confused when several versions of the libraries are installed, so make sure to check torch.__version__ and fastai.__version__ from within JN.
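
A quick cell like this shows which versions the running kernel actually imports (they can differ from what pip list reports in a terminal if the kernel points at another environment):

    import torch
    import fastai

    # Versions as seen by the Jupyter kernel itself
    print("torch: ", torch.__version__)
    print("fastai:", fastai.__version__)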