I’ve put together a notebook to explore the callback system in the fastai library, which I think is pretty amazing. It lets you customize your training exactly as you want.
In particular, I show how to do three different kinds of things:
- record some data linked to the training (here, the validation loss at each epoch)
- change the hyper-parameters when some condition is met (here, dividing the learning rate by 10 each time the validation loss fails to improve). I don’t endorse that specific training policy; I just chose it because it’s a very popular one.
- make some parameters change as training progresses (here, varying the p of a dropout layer throughout training)
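To give a feel for how callbacks make these three things possible, here is a minimal, self-contained sketch in plain Python. This is not fastai’s actual API — the class and hook names (`Callback`, `on_epoch_end`, `Trainer`) are my own simplified stand-ins — but the idea is the same: the training loop calls hooks, and each callback reads or mutates the trainer’s state.

```python
class Callback:
    """Base class: a no-op hook the training loop calls at each epoch end."""
    def on_epoch_end(self, trainer): pass

class Recorder(Callback):
    """1) Record some data: store the validation loss at each epoch."""
    def __init__(self): self.val_losses = []
    def on_epoch_end(self, trainer):
        self.val_losses.append(trainer.val_loss)

class ReduceLROnPlateau(Callback):
    """2) Change a hyper-parameter on a condition: divide the learning
    rate by 10 whenever the validation loss fails to improve."""
    def __init__(self): self.best = float('inf')
    def on_epoch_end(self, trainer):
        if trainer.val_loss < self.best:
            self.best = trainer.val_loss
        else:
            trainer.lr /= 10

class DropoutScheduler(Callback):
    """3) Vary a parameter through training: linearly anneal the
    dropout probability p from p_start to p_end."""
    def __init__(self, p_start=0.5, p_end=0.1):
        self.p_start, self.p_end = p_start, p_end
    def on_epoch_end(self, trainer):
        frac = (trainer.epoch + 1) / trainer.n_epochs
        trainer.dropout_p = self.p_start + frac * (self.p_end - self.p_start)

class Trainer:
    """Toy training loop: instead of real training, it walks through a
    list of pre-set validation losses and fires the callback hooks."""
    def __init__(self, callbacks, n_epochs, lr=0.1, dropout_p=0.5):
        self.callbacks, self.n_epochs = callbacks, n_epochs
        self.lr, self.dropout_p = lr, dropout_p
    def fit(self, fake_val_losses):
        for self.epoch, self.val_loss in enumerate(fake_val_losses):
            for cb in self.callbacks:
                cb.on_epoch_end(self)
```

Running `Trainer([Recorder(), ReduceLROnPlateau(), DropoutScheduler()], n_epochs=4).fit([1.0, 0.8, 0.9, 0.7])` records all four losses, cuts the learning rate once (at the 0.9 plateau), and leaves dropout at its final annealed value — three behaviors, each isolated in its own callback, with an untouched training loop.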
I hope you find it useful!