I’m running the Movie Sentiment Analysis code snippet from lesson 1, but I’ve hit the same CUDA out-of-memory error multiple times, despite my GTX 1060 (laptop version) having 6 GB of VRAM and the batch size set to 16. Is there anything else I can do to optimize memory usage — either reducing the VRAM consumed or increasing the allocated capacity? The latter seems unlikely, since it already appears to be near the limit. The error is here:
RuntimeError: CUDA out of memory. Tried to allocate 92.00 MiB (GPU 0; 5.94 GiB total capacity; 4.67 GiB already allocated; 49.94 MiB free; 5.13 GiB reserved in total by PyTorch)
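For context, the kinds of tweaks I've been considering look roughly like this (a minimal sketch, not the actual lesson code — the model, data, and micro-batch sizes here are placeholders I made up; the real training loop would differ):

```python
# Hypothetical sketch: keep an effective batch of 16 but only put
# micro-batches of 4 on the GPU at once (gradient accumulation),
# plus mixed precision to roughly halve activation memory.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model standing in for the sentiment classifier.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

micro_batch, accum_steps = 4, 4  # 4 * 4 = effective batch of 16
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

# Fake data standing in for one effective batch of movie reviews.
data = [(torch.randn(micro_batch, 128), torch.randint(0, 2, (micro_batch,)))
        for _ in range(accum_steps)]

optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    x, y = x.to(device), y.to(device)
    # Autocast runs the forward pass in reduced precision on CUDA.
    with torch.autocast(device_type=device, enabled=(device == "cuda")):
        loss = loss_fn(model(x), y) / accum_steps  # scale for accumulation
    scaler.scale(loss).backward()
    if (step + 1) % accum_steps == 0:
        scaler.step(optimizer)
        scaler.update()
        optimizer.zero_grad()
```

No idea yet whether the lesson's tokenized sequences are long enough for this alone to fit in 6 GB, but accumulation at least caps per-step activation memory.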
I’m debating spinning up a cloud server instead, since the GPU model (not its manufacture date) is around 5 years old, which might make it too outdated for the tech we’re using here.