I’m using a lightly modified version of the train_imagenette example notebook. While testing with larger numbers of runs, I noticed that GPU memory usage kept creeping upward. This is a problem because the GPU eventually runs out of memory, which limits how many runs I can specify.
It seems as if fastai2 is leaking GPU memory somewhere? In `main`, the only thing that stays alive between iterations is the dataloader.
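For context, here is roughly the shape of my loop (a simplified sketch, not the notebook’s exact code; the data-loading arguments, model setup, and the cleanup calls at the end are my own stand-ins):

```python
import gc
import torch
from fastai2.vision.all import *

# Built once, before the loop -- the only object that survives across runs
path = untar_data(URLs.IMAGENETTE_160)
dls = ImageDataLoaders.from_folder(path, valid='val', item_tfms=Resize(128))

for run in range(20):
    learn = Learner(dls, xresnet50(n_out=dls.c), metrics=accuracy)
    learn.fit_one_cycle(1, 1e-3)
    # Everything created during the run should be collectable after this,
    # yet allocated GPU memory still grows from run to run
    del learn
    gc.collect()
    torch.cuda.empty_cache()
```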
Has anyone encountered a similar bug and found a fix? Or what tools can I use to track down where the GPU memory is being leaked?
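For what it’s worth, the only probes I know of so far are PyTorch’s allocator counters and walking the garbage collector for live CUDA tensors (sketch below; `gpu_report` and `live_cuda_tensors` are just helper names I made up):

```python
import gc
import torch

def gpu_report(tag=""):
    # What PyTorch has handed out to tensors vs. what its caching
    # allocator is still holding from CUDA
    alloc = torch.cuda.memory_allocated() / 2**20
    reserved = torch.cuda.memory_reserved() / 2**20
    print(f"[{tag}] allocated={alloc:.0f} MiB, reserved={reserved:.0f} MiB")

def live_cuda_tensors():
    # Enumerate CUDA tensors still reachable by the garbage collector;
    # anything listed here after cleanup is a candidate for the leak
    for obj in gc.get_objects():
        try:
            if torch.is_tensor(obj) and obj.is_cuda:
                yield type(obj).__name__, tuple(obj.shape)
        except Exception:
            pass

gpu_report("after run")
for name, shape in live_cuda_tensors():
    print(name, shape)
```

I believe `torch.cuda.memory_summary()` also prints a detailed allocator breakdown on recent PyTorch versions, but I haven’t gotten anything conclusive out of it yet.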
Thank you.