I’m training CycleGAN on a custom dataset using the fastai sample on GitHub.
I fit for 10 epochs and then fit again and again due to Colab’s limitations. Every time I fit again, the training loss does not seem to continue from where the previous run finished. Is this normal behavior, or am I missing something?
If Colab’s session disconnects, I reload the saved model using the code below:

```python
learn = learn.load('/content/drive/My Drive/charcoalDatasetV2/models/epoch_9')
```
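For context, the resume pattern boils down to: persist the state at the end of a session, then in a fresh session reload it and keep going. Here is a minimal stand-in sketch of that round trip (plain Python, not fastai itself; the filename `epoch_9.json` and the state keys are made up for illustration):

```python
import json
import os
import tempfile

# Stand-in sketch of the save/resume pattern (not fastai code):
# after a session, persist the training state; in a new session,
# reload it and continue from the stored values.

def save_checkpoint(path, state):
    """Write the training state to disk (stand-in for learn.save)."""
    with open(path, 'w') as f:
        json.dump(state, f)

def load_checkpoint(path):
    """Read the training state back (stand-in for learn.load)."""
    with open(path) as f:
        return json.load(f)

# Session 1: "train", then save before the runtime disconnects.
path = os.path.join(tempfile.gettempdir(), 'epoch_9.json')
save_checkpoint(path, {'epoch': 9, 'train_loss': 2.0})

# Session 2 (after a disconnect): reload and continue.
state = load_checkpoint(path)
assert state['epoch'] == 9          # resumes at the saved epoch
assert state['train_loss'] == 2.0   # loss picks up where it left off
```

With fastai, the important part is rebuilding the `Learner` the same way in the new session before calling `learn.load(...)`, since only the weights (and optionally the optimizer state) come from the file.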
If training finished and the session is still active, I don’t reload; I just continue with the already-trained model.
The last epoch of the previous session ended with a train loss of 2, while the first epoch of the new session reported 3.65. I will check the actual starting point again.
The actual starting point is right; it started where I left off. In this case I didn’t reload the model, so I will check again with the reloaded saved model.
Previous training
I checked reloading the model, and it also starts from where I left off. I guess everything is okay. I was looking at the result of the first epoch, not the actual starting train loss. Thanks for your help @muellerzr.
For SaveModelCallback, should I set the monitor parameter to ‘accuracy’ or ‘train_loss’ for CycleGAN so that it saves the model with the minimum train_loss? @muellerzr

```python
SaveModelCallback(learn, every='epoch', monitor='accuracy', name='model')
```
I am able to save with the `every='epoch'` setting but not with `every='improvement'`. I tried both of the calls below, but neither saved anything:

```python
SaveModelCallback(learn, every='improvement', monitor='accuracy', name='model')
```

and

```python
SaveModelCallback(learn, every='improvement', monitor='train_loss', name='model')
```
It doesn’t produce any abnormal output and training runs normally, but no files are saved.
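One plausible explanation (a hypothetical sketch of how an improvement-based saver behaves, not the actual fastai source): the callback looks up the monitored name among the values recorded each epoch, and if CycleGAN training never records an `'accuracy'` value, there is nothing to compare, so no checkpoint is ever written even though training proceeds silently. Roughly:

```python
# Hypothetical sketch of an every='improvement' saver (NOT real fastai code):
# it only saves when the monitored value exists and beats the best so far.

def make_improvement_saver(monitor):
    best = {'value': None}
    saved = []  # stand-in for model files written to disk

    def on_epoch_end(recorded_metrics):
        value = recorded_metrics.get(monitor)  # None if metric isn't recorded
        if value is None:
            return saved  # nothing to compare -> nothing is ever saved
        if best['value'] is None or value < best['value']:
            best['value'] = value
            saved.append(value)  # "write" the model file
        return saved

    return on_epoch_end

step = make_improvement_saver('accuracy')
# A GAN run that records only losses, no 'accuracy':
step({'train_loss': 3.65})
step({'train_loss': 2.0})
# `saved` stays empty -> matches "training runs but files are not saved".
```

If that is what’s happening here, the fix would be to monitor a value the recorder actually tracks for this CycleGAN Learner (worth printing the recorded metric names to check) rather than `'accuracy'`.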