[EDIT] with point 5
Thanks Stas.
For now (and before you do more experiments), I will keep in mind these 5 things from your great documentation (a short code sketch putting them together follows the list):
- `learn.purge()` removes any of the Learner guts that are no longer needed and reloads the model on GPU, which also helps to reduce memory fragmentation (copy/paste of your text).
- Run `learn.purge()` before any big change in your model training (image size, unfreeze, etc.).
- When you run `learn.load()`, `learn.purge()` is done by default (no need to run it).
- After `learn.export()`, it is good practice to run `learn.purge()`.
- (soon, a `learn.destroy()` implementation) To reclaim GPU memory, or after a "CUDA out of memory" exception, run `del learn; gc.collect()` or `learn = None; gc.collect()` (they are equivalent). Do not forget to reconstruct your learner afterwards (`learn = ...`).
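
To make these points concrete, here is a minimal sketch of how they fit together, assuming fastai v1 and an existing `data` DataBunch (the `data`, architecture and hyperparameters are placeholders for illustration, not something from your post):

```python
import gc
from fastai.vision import *  # fastai v1, where purge()/export()/load_learner() live

# Assumption: `data` is an ImageDataBunch you have already built.
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)

# Point 2: purge before a big change in training (here, unfreezing).
learn.purge()
learn.unfreeze()
learn.fit_one_cycle(2, max_lr=slice(1e-5, 1e-3))

# Point 4: after exporting, purge what is no longer needed.
learn.export()
learn.purge()

# Point 5: reclaim GPU memory (e.g. after a "CUDA out of memory" exception),
# then reconstruct the learner, here from the exported file.
del learn          # or: learn = None
gc.collect()
learn = load_learner(data.path)
```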