Hi,
Is it possible to free up unwanted GPU memory after every iteration of learn.TTA()?
I am facing a problem where the GPU memory fills up by the 6th iteration and it throws a CUDA out-of-memory error. I am using a GTX 1080 Ti with 11 GB of memory.
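In case it helps, here is a minimal sketch of freeing cached GPU memory between iterations. It assumes PyTorch is the backend (which fastai uses); the helper name free_gpu_memory and the loop around it are my own illustration, not fastai API.

```python
import gc

def free_gpu_memory():
    """Release memory held by stale tensors between inference passes.

    gc.collect() drops lingering Python references to old tensors;
    torch.cuda.empty_cache() then returns the cached blocks to the
    CUDA driver so they can be reused by later allocations.
    """
    gc.collect()  # collect unreachable tensors first
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # release PyTorch's cached GPU blocks
    except ImportError:
        pass  # torch not installed; nothing GPU-side to free

# Hypothetical usage between TTA passes:
# for _ in range(n_iterations):
#     preds, targs = learn.TTA()
#     free_gpu_memory()
```

Note that empty_cache() does not free tensors you still hold references to, so make sure you are not accumulating predictions on the GPU across iterations (e.g. move them to the CPU first).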
Thanks for the reply. I am pretty new to fastai, so I am not sure whether a callback can be added to the learner at the inference phase. Is there a way to add a callback for inference as well?
Callbacks are added when the learner is created. Inside that callback, add a condition so the GPU cleanup runs only when the model is in eval mode; learn.model.training can help you with that (it is False during inference).
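To make the idea above concrete, here is a rough sketch of that condition. The class below is hypothetical and only mimics the shape of a fastai callback (the real fastai Callback base class defines hooks like on_batch_end with its own signature); the point is just the model.training check guarding the cleanup.

```python
import gc

class GPUCleanupCallback:
    """Hypothetical callback: free cached GPU memory, but only in eval mode.

    model.training is True during training and False after
    model.eval() is called, i.e. during inference.
    """
    def __init__(self, model):
        self.model = model

    def on_batch_end(self, **kwargs):
        # Skip cleanup during training; only run when inferring.
        if not self.model.training:
            gc.collect()  # drop stale Python references to old tensors
            try:
                import torch
                if torch.cuda.is_available():
                    torch.cuda.empty_cache()  # release cached GPU blocks
            except ImportError:
                pass  # no torch, nothing to free on the GPU
```

In real fastai code you would pass the callback when constructing the learner (e.g. via its callbacks argument) so it fires on every batch, and the training check keeps it from slowing down training.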