Sometimes I'd like to skip the validation-loss calculation at the end of each epoch, to save time.
Is there a better way to do that than:
learn.data.valid_dl = None
(this seems to work but doesn’t seem ideal)
If I pass None for valid_ds when creating my DataBunch, a validation loss is still reported (I imagine it's computed on the training data).
Ideally we could have the validation loss calculated only every n epochs. That seems like a whole other story, but if it's easy to do please let me know.
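To be concrete, here is roughly the behaviour I'm after, sketched as a plain Python loop rather than fastai code (names like model_step, validate, and validate_every are just placeholders, not fastai API):

```python
def train(model_step, validate, n_epochs, validate_every=1):
    """Train for n_epochs, running the (expensive) validation pass
    only every `validate_every` epochs, plus the final epoch."""
    history = []
    for epoch in range(n_epochs):
        train_loss = model_step(epoch)
        # Skip validation except every n-th epoch, but always validate
        # on the last epoch so the run ends with an up-to-date metric.
        if (epoch + 1) % validate_every == 0 or epoch == n_epochs - 1:
            valid_loss = validate(epoch)
        else:
            valid_loss = None
        history.append((epoch, train_loss, valid_loss))
    return history
```

With validate_every=2 over five epochs, this would validate on epochs 1, 3, and 4 (zero-indexed) and record None for the skipped ones.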