Fitting CycleGAN several times

I’m training CycleGAN on a custom dataset using the fastai example on GitHub.

I fit for 10 epochs and then fit again and again due to Colab’s limitations. Every time I fit again, it seems the training loss does not continue from where the previous training finished. Is this normal behavior, or am I missing something?

learn = None
cycle_gan = CycleGAN(3, 3, gen_blocks=9)
learn = Learner(data, cycle_gan, loss_func=CycleGanLoss(cycle_gan),
                opt_func=partial(optim.Adam, betas=(0.5, 0.99)),
                callback_fns=[CycleGANTrainer])

I run the code above one time and run the code below several times.

learn.fit(10, 1e-4,
          callbacks=[SaveModelCallback(learn, every='epoch', monitor='train_loss', name='epoch')])

Do you load the saved model back in when you make your learner again?

If Colab’s session has disconnected, I reload the saved model with the code below.
learn = learn.load('/content/drive/My Drive/charcoalDatasetV2/models/epoch_9')

If training has finished and the session is still active, I don’t reload and just continue with the already-trained model.
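Under the hood, resuming from where you left off requires restoring both the model weights and the optimizer state (Adam’s running moments), which fastai’s `learn.save`/`learn.load` wrap for you. A minimal plain-PyTorch sketch of that round trip (the toy `nn.Linear` model and in-memory buffer are illustrative, not the thread’s actual CycleGAN setup):

```python
import io
import torch
import torch.nn as nn

# Toy model standing in for the CycleGAN networks.
model = nn.Linear(4, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.5, 0.99))

# One dummy step so the optimizer accumulates Adam state (moments).
model(torch.randn(8, 4)).pow(2).mean().backward()
opt.step()

# Checkpoint model weights AND optimizer state together.
buffer = io.BytesIO()  # stands in for a file on Drive
torch.save({'model': model.state_dict(), 'opt': opt.state_dict()}, buffer)

# "New session": rebuild the objects, then restore both state dicts.
model2 = nn.Linear(4, 2)
opt2 = torch.optim.Adam(model2.parameters(), lr=1e-4, betas=(0.5, 0.99))
buffer.seek(0)
state = torch.load(buffer)
model2.load_state_dict(state['model'])
opt2.load_state_dict(state['opt'])

# The restored weights match, so training continues from the same point.
assert torch.equal(model.weight, model2.weight)
```

If only the weights were restored but not the optimizer state, Adam would restart with zeroed moments, which is one common reason a resumed run behaves differently for the first few batches.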

How bad is the difference? Some setback is what I’ve noticed in other models, but nothing too out of the ordinary (maybe 1 to 1.5).

In the last trial, it went from 2 to 3.65.

Is that where it started training, too?

The last session’s final epoch ended at 2, and the result of the first epoch in the new session was 3.65. I will check the actual starting point again.
The actual starting point is right: it started where I left off. In this case I didn’t reload the model. I will also check with the reloaded saved model.
Previous training

Current training

I checked with the reloaded model and it also starts from where I left off. I guess everything is okay; I was looking at the result of the first epoch, not the actual starting train loss. Thanks for your help @muellerzr.
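That mix-up is easy to make because the `train_loss` fastai prints is a debiased exponential moving average over batches, not the raw loss of the first batch. A small re-implementation sketch of that smoothing (the `beta=0.98` default is what I believe fastai v1’s Recorder uses; treat it as an assumption):

```python
# Debiased exponential moving average of per-batch losses,
# mimicking how fastai v1 computes the reported train_loss.
def smoothed_losses(batch_losses, beta=0.98):
    mov_avg, out = 0.0, []
    for n, val in enumerate(batch_losses, start=1):
        mov_avg = beta * mov_avg + (1 - beta) * val
        out.append(mov_avg / (1 - beta ** n))  # bias correction for early steps
    return out

# A flat raw loss of 2.0 yields a smoothed value of 2.0 as well:
print(smoothed_losses([2.0] * 5))

# But the smoothed value reacts slowly: after 99 batches at 2.0,
# one batch at 10.0 barely moves the reported number.
print(smoothed_losses([2.0] * 99 + [10.0])[-1])
```

So the figure printed at the end of the first resumed epoch averages over that whole epoch, and can differ noticeably from the loss at the very first batch.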

For SaveModelCallback, should I set the monitor parameter to ‘accuracy’ or ‘train_loss’ for CycleGAN, to save the model with minimum train_loss? @muellerzr
[SaveModelCallback(learn, every='epoch', monitor='accuracy', name='model')]

I am able to save with the every='epoch' setting, but not with every='improvement'. I tried both of the lines below, but neither saved a model.
[SaveModelCallback(learn, every='improvement', monitor='accuracy', name='model')]
[SaveModelCallback(learn, every='improvement', monitor='train_loss', name='model')]
It doesn’t give any abnormal output and training runs, but the model files are not saved.
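One possible cause (an assumption, not verified against this trainer): the CycleGAN setup computes no ‘accuracy’ metric and may record no validation loss, so the tracker never sees a value to compare and silently skips saving. A fallback is to track the best loss yourself and save only on improvement; a minimal sketch of that pattern, where `save_fn` stands in for a call like `learn.save(...)` and is purely illustrative:

```python
# Manual "save on improvement" pattern: keep the best train_loss seen
# so far and invoke the save function only when it decreases ('min' mode).
class BestLossSaver:
    def __init__(self, save_fn):
        self.best = float('inf')   # nothing saved yet, any loss improves
        self.save_fn = save_fn     # stand-in for learn.save(...)

    def on_epoch_end(self, train_loss):
        if train_loss < self.best:
            self.best = train_loss
            self.save_fn()
            return True            # saved this epoch
        return False               # no improvement, nothing written

# Example run: improvements happen at losses 3.0, 2.5, and 2.1.
saved = []
saver = BestLossSaver(save_fn=lambda: saved.append('checkpoint'))
for loss in [3.0, 2.5, 2.7, 2.1]:
    saver.on_epoch_end(loss)
print(len(saved))  # → 3
```

Calling this from a per-epoch hook (e.g. inside a custom callback’s epoch-end method) reproduces what every='improvement' is meant to do, independent of which metrics the trainer records.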