How to estimate the required GPU memory for a model?

I am currently trying to implement a model with the data from the Amazon Rainforest Kaggle Competition.

Using the ResNeXt architecture, I ran into GPU memory issues that I managed to work around by successively reducing the batch size. I am not satisfied with that approach, because I think it should be possible to make an educated guess about the memory needed based on the model's architecture.

Based on this Quora answer, you should generally be able to get a good approximation from the layer, node, and edge counts of the neural net.
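To make that idea concrete, here is a minimal back-of-envelope sketch. It assumes float32 tensors, that weights, gradients, and optimizer state each hold one copy per parameter (two extra copies roughly matches an Adam-style optimizer), and that all activations are kept for the backward pass. The function name and the example counts are my own illustration, not from any library.

```python
BYTES_PER_FLOAT32 = 4

def rough_memory_bytes(num_params, num_activations, batch_size,
                       optimizer_copies=2):
    """Back-of-envelope GPU memory estimate.

    Weights and gradients each keep one float32 copy per parameter,
    plus `optimizer_copies` more for optimizer state; activations are
    stored once per sample for the backward pass.
    """
    param_bytes = num_params * (2 + optimizer_copies) * BYTES_PER_FLOAT32
    activation_bytes = num_activations * batch_size * BYTES_PER_FLOAT32
    return param_bytes + activation_bytes

# Illustrative numbers: ~25M parameters (roughly ResNeXt-50) and an
# assumed 10M activation values per sample, batch size 64.
print(rough_memory_bytes(25_000_000, 10_000_000, 64) / 1e9, "GB")
# prints: 2.96 GB
```

Note that the activation term dominates at typical batch sizes, which is why shrinking the batch size is such an effective (if unsatisfying) fix.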

Is there some way to get this information about the model from the fastai library?

I have discovered learn.get_activations(), but using its output would require significant parsing. Is there a more straightforward way to get the required information about the model?

Ideally, I want to write a function that approximates the required GPU memory of an architecture based on parameters such as image and batch size. This would let me get the model working on different machines without a lot of trial and error, saving some (expensive) computation time.
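A sketch of what such a function could look like for a toy convolutional stack, parameterized by image size and batch size. Everything here is an assumption for illustration: 3x3 convolutions each followed by 2x2 downsampling, an RGB input, float32 storage, and only weights, gradients, and activations counted (no optimizer state or framework overhead).

```python
def estimate_conv_net_memory(image_size, batch_size, channels, bytes_per_el=4):
    """Estimate memory (bytes) for a toy stack of 3x3 conv layers,
    each followed by 2x2 downsampling. `channels` lists the output
    channel count of each conv; the input is assumed to be RGB."""
    h = w = image_size
    in_ch = 3
    params = 0
    activations = h * w * in_ch                    # the input tensor itself
    for out_ch in channels:
        params += 3 * 3 * in_ch * out_ch + out_ch  # kernel weights + bias
        activations += h * w * out_ch              # conv output
        h //= 2
        w //= 2                                    # 2x2 downsampling
        activations += h * w * out_ch              # downsampled output
        in_ch = out_ch
    # weights + gradients once, activations once per sample in the batch
    return (2 * params + batch_size * activations) * bytes_per_el

# e.g. 256x256 images, batch size 64, three conv blocks:
print(estimate_conv_net_memory(256, 64, [64, 128, 256]) / 1e9, "GB")
```

Real architectures with skip connections, batch norm, and framework-level caching will deviate from this, so it is best treated as a lower bound to compare batch sizes against, not an exact figure.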

I'd also be happy to get some other suggestions. Maybe I overlooked something obvious.


A quick search turned up this estimator:
