I am just walking through the following blog post:
I noticed that the GPU memory usage is independent of the batch size. I've tested this in a Kaggle notebook, a Colab notebook, and on my own machine. I actually only noticed it because the notebook runs into the GPU memory limit on my machine, which has only 8 GB of GPU RAM. Why is that?
The above notebook uses 11009 MB. I use the code from this answer to report the GPU memory utilization: https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available
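For reference, here is a minimal sketch of another way to check GPU memory usage, independent of the notebook helper linked above (this is my own assumption of an equivalent check, not the code from that answer; it assumes `nvidia-smi` is on the PATH and queries the driver directly rather than the framework):

```python
import shutil
import subprocess

def gpu_memory_used_mb():
    """Return a list of used-memory values in MB, one per GPU,
    or None if nvidia-smi is not available on this machine."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True,
    )
    if out.returncode != 0:
        return None
    # nvidia-smi prints one line per GPU, e.g. "11009"
    return [int(line) for line in out.stdout.strip().splitlines()]

print(gpu_memory_used_mb())
```

Numbers reported this way should match what the notebook helper shows, since both ultimately read the driver's view of allocated memory.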
Ideally I'd like to run the notebook on my local machine. It does run there if I truncate the training dataset. But I would have guessed that the batch size, not the size of the training dataset, is the main factor determining GPU memory utilization?
Thanks a lot and best regards,