I get the error:
OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 2.00 GiB total capacity; 1.61 GiB already allocated; 0 bytes free; 1.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
In Task Manager's Performance tab in Windows I can see that I have GPU 0 and GPU 1. GPU 1 is an NVIDIA GeForce MX450 and shows 10 GB of memory, yet the error says it cannot allocate even 26 MiB.
I am on chapter 4, "Getting started with NLP for absolute beginners", and as you suggested last time I tried smaller batch sizes, going down as far as bs=7, but I am still facing the error:
CUDA out of memory. Tried to allocate 376.00 MiB (GPU 0; 2.00 GiB total capacity; 1.45 GiB already allocated; 0 bytes free; 1.51 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
The error doesn't seem to change.
What do you suggest?
PS: After each trial I closed my environment and restarted everything.
It is never going to be much fun trying to run models on a 2 GB card; you are going to spend most of your time running into out-of-memory issues. Lowering the batch size until the model fits is one option. But using a free service like Colab or Kaggle will give you more memory, larger batch sizes, and most likely faster training than you can achieve on your 2 GB card, with a lot less pain.
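One thing worth checking first: Task Manager's GPU numbering doesn't necessarily match CUDA's, and the 10 GB figure you see there likely includes shared system RAM (the MX450's dedicated VRAM is typically 2 GB, which matches the 2.00 GiB in the error). A minimal sketch, assuming a standard PyTorch install, to confirm what PyTorch actually sees and to try the max_split_size_mb suggestion from the error message (128 MB here is an arbitrary example value, not a recommendation):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF must be set before torch initializes its
# CUDA allocator; max_split_size_mb:128 is just an example value
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

try:
    import torch

    if torch.cuda.is_available():
        # List every GPU CUDA can see, with its *dedicated* memory
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            print(f"cuda:{i}: {props.name}, "
                  f"{props.total_memory / 2**30:.2f} GiB dedicated")
    else:
        print("CUDA is not available to PyTorch")
except ImportError:
    print("PyTorch is not installed in this environment")
```

If this prints only one device with about 2 GiB, that confirms the MX450's dedicated memory really is the hard limit, and moving to Colab or Kaggle is the path of least resistance.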