I am running into a CUDA out-of-memory error on Kaggle.
RuntimeError: CUDA out of memory. Tried to allocate 1.63 GiB (GPU 0; 15.90 GiB total capacity; 13.57 GiB already allocated; 993.88 MiB free; 14.27 GiB reserved in total by PyTorch)
I have read through the GPU issue section, as well as a number of questions posted here and elsewhere, with no real luck.
The dataset is tabular, with 1,557,000 rows and 66 features.
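For scale, a back-of-the-envelope calculation (assuming the features are stored as float32, which may not match the actual dtypes) suggests the raw table itself is well under 1 GiB, so the data alone should not come close to filling a 16 GiB GPU:

```python
# Rough memory footprint of the full table, assuming 4-byte float32 values.
rows, cols = 1_557_000, 66
bytes_per_float32 = 4
size_gib = rows * cols * bytes_per_float32 / 2**30
print(f"{size_gib:.2f} GiB")  # → 0.38 GiB
```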
Even the smallest model I can configure, with layers=[1, 1] and batch_size=1, hits the CUDA error.
I am unable to run learn.fit_one_cycle without hitting the CUDA error.
I have tried refreshing the Kaggle page, confirming that the GPU is empty, and then running again, but the problem persists.
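Rather than refreshing the page, one way to check what PyTorch itself thinks is on the GPU is to query its memory counters directly in a notebook cell. A minimal sketch (the helper name is mine, and it guards for environments without a CUDA device):

```python
import gc

def report_gpu_memory():
    """Return PyTorch's view of GPU memory usage, if CUDA is present."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if not torch.cuda.is_available():
        return "no CUDA device visible"
    gc.collect()                 # drop unreachable Python objects holding tensors
    torch.cuda.empty_cache()     # release cached blocks back to the driver
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    return f"allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB"

print(report_gpu_memory())
```

If "allocated" is already large before training starts, something earlier in the notebook (e.g. a previous learner) is still holding tensors; deleting it and re-running the cell above would show the memory being returned.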
Is it likely that there is something weird in my dataset that is causing a huge memory leak somehow? I have run models on Kaggle with far more than 1.5 million data points before.
Any ideas are much appreciated.