Fastai v1 tabular GPU memory independent of batch size?

Hello,

I am working through the following blog post:

I noticed that the GPU memory usage is independent of the batch size. I've tested this in a Kaggle notebook, a Colab notebook, and on my own machine. I actually only noticed it because the notebook runs into a GPU memory limit on my machine, which has only 8 GB of GPU RAM. Why is that?
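For context, this is roughly the kind of run I'm comparing: the standard fastai v1 tabular pipeline with only `bs` changed between runs. This is a minimal sketch with made-up placeholder data, not the actual dataset or columns from the blog post:

```python
from fastai.tabular import *  # fastai v1 API (also brings in pd and np)

# Placeholder dataframe, just to illustrate the batch-size experiment;
# the blog post's actual dataset and columns differ.
df = pd.DataFrame({'cat':    np.random.choice(['a', 'b', 'c'], 1000),
                   'cont':   np.random.randn(1000),
                   'target': np.random.choice(['yes', 'no'], 1000)})
procs = [FillMissing, Categorify, Normalize]

data = (TabularList.from_df(df, path='.',
                            cat_names=['cat'], cont_names=['cont'],
                            procs=procs)
        .split_by_idx(list(range(800, 1000)))  # last 200 rows as validation
        .label_from_df(cols='target')
        .databunch(bs=64))                     # only bs varies between runs

learn = tabular_learner(data, layers=[200, 100], metrics=accuracy)
learn.fit_one_cycle(1)
```

With a setup like this I'd expect a larger `bs` to mean larger activation tensors on the GPU, hence my confusion.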

The above notebook uses 11009 MB. I use the following code to report the GPU memory utilization: https://stackoverflow.com/questions/48750199/google-colaboratory-misleading-information-about-its-gpu-only-5-ram-available
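For convenience, the snippet from that Stack Overflow answer looks roughly like this (it needs the `psutil`, `humanize`, and `GPUtil` packages):

```python
import os
import psutil
import humanize
import GPUtil

# System RAM and resident size of the current process
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
      "| Proc size: " + humanize.naturalsize(process.memory_info().rss))

# Memory stats for the first GPU
gpu = GPUtil.getGPUs()[0]
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
    gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil * 100, gpu.memoryTotal))
```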

Ideally, I'd like to run the notebook on my local machine. If I truncate the training dataset, it runs there. But I would have guessed that the batch size, not the size of the training dataset, is the factor that most determines GPU memory utilization?

Any ideas?

Thanks a lot and best regards,
Christian

Nice bit of code…
but you shouldn't have the installs in the code…
not everyone wants what you did and may not notice… (I didn't)
and it's kind of unfair…

my results though [on my desktop machine]:

```
Gen RAM Free: 19.8 GB | Proc size: 59.2 MB
GPU RAM Free: 11710MB | Used: 578MB | Util 5% | Total 12288MB
```