Lesson 3 : GPU out of memory

In Lesson 3, when I run:

`data.show_batch(2, figsize=(2,3))`

I get:

  `RuntimeError: CUDA error: out of memory`

I checked my GPU memory — it's 8GB — the device properties show:



`CudaDeviceProperties(name='GeForce RTX 2070 SUPER', major=7, minor=5, total_memory=7982MB, multi_processor_count=40)`

However, when I run:

`free = gpu_mem_get_free_no_cache()`

I get only 86MB free.

  • How do I increase the GPU memory allocation to above 86MB?
  • Is this advisable?
  • Is this set in the virtual environment - I used Conda to install PyTorch etc.
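For comparison with what fastai reports, here is a small helper (a sketch, not fastai's implementation; it assumes `nvidia-smi` is on the PATH and that device index 0 is the card in question) that asks the driver directly how much memory is free:

```python
import shutil
import subprocess

def free_gpu_mem_mb(device=0):
    """Free GPU memory in MB as reported by nvidia-smi, or None if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return None  # no NVIDIA driver/tools on this machine
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits", "-i", str(device)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout.strip())

print(free_gpu_mem_mb())
```

If this also shows only ~86MB free, the memory is genuinely held by some other process, not misallocated by your environment — Conda doesn't set any GPU memory limit.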

I’ve tried restarting the kernel and reducing batch sizes — neither seems to have any effect.


What do you see when you run the following in a separate terminal? (Run it first, then start your Jupyter notebook and watch the memory usage as you step through the notebook.)

`watch -n 1 nvidia-smi`

Reducing the batch size should produce a visible drop in memory usage — make sure your dataloader is actually being told the batch size. From memory, after you set `bs`, inspect the dataloader's batch-size attribute and check that it matches (if it doesn't look right, you may need to do some digging).
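As a rough sanity check on why batch size matters: the input tensor alone scales linearly with `bs`. A back-of-the-envelope sketch (the 3×224×224 float32 shape is just an assumed example; real usage is dominated by activations, gradients, and optimizer state, so treat this as a lower bound):

```python
def input_batch_mb(bs, channels=3, height=224, width=224, bytes_per_elem=4):
    """Approximate size in MB of one float32 input batch
    (inputs only — not activations, gradients, or optimizer state)."""
    return bs * channels * height * width * bytes_per_elem / 2**20

# Halving the batch size halves this footprint.
print(input_batch_mb(64))  # 64 images of 3x224x224 float32 -> 36.75 MB
print(input_batch_mb(32))  # -> 18.375 MB
```

So if a smaller `bs` doesn't move the numbers in `nvidia-smi` at all, the dataloader is probably not receiving the new value.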



Adrian, thanks for the response.

As you suggested, I ran the `watch -n 1 nvidia-smi` command.

It turned out I had multiple instances of Jupyter notebooks running, so I closed the extra ones and everything works as expected!! Noob mistake :confused: