I am working on Colab. Can anybody explain why, after I save a model, shut down the runtime, and then load the model in a new notebook, it behaves like a cold start? The losses are high, and only after training for another epoch do the metrics get back to something reasonable. Even then the model underfits and loses the best result it had produced after a good 10 to 12 hours of training. It doesn't happen every time, either. Here are two notebooks as evidence; can anyone tell me what is going on, or where I am going wrong?
One thing to note:
rn50-7-1 was fine-tuned on this dataset using the progressive resizing technique Jeremy discusses in Lesson 3, i.e. increasing the image size during training: here 32x32, then 64x64, then 128x128, and finally 224x224.
rn50-8-2 was produced by unfreezing rn50-7-1 and then applying transfer learning.
rn50-7-1 reached an accuracy of 95.4%, and rn50-8-2 reached 95.76%.
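In case it helps with debugging: one common cause of this cold-start behaviour is checkpointing only the model weights, so the optimizer state (e.g. Adam's running moments) is lost between sessions and the first epoch after loading effectively re-warms the optimizer. Below is a minimal PyTorch sketch of saving and restoring both, using a toy `nn.Linear` as a stand-in for the ResNet learner; I am assuming the save/load in my notebooks reduces to something like this under the hood, which may not be exactly what fastai does.

```python
import torch
import torch.nn as nn

# Toy stand-in for the ResNet-50 learner: any nn.Module works the same way.
model = nn.Linear(4, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step so the optimizer actually has state to save.
loss = model(torch.randn(8, 4)).sum()
loss.backward()
opt.step()

# Save BOTH the weights and the optimizer state. Saving only
# model.state_dict() drops Adam's moment estimates, so resumed
# training can look like a cold start for the first epoch.
torch.save({"model": model.state_dict(), "opt": opt.state_dict()}, "ckpt.pth")

# In the new notebook/session: rebuild identical objects, then restore.
model2 = nn.Linear(4, 2)
opt2 = torch.optim.Adam(model2.parameters(), lr=1e-3)
ckpt = torch.load("ckpt.pth")
model2.load_state_dict(ckpt["model"])
opt2.load_state_dict(ckpt["opt"])
```

The other thing worth checking is that the new notebook rebuilds the data pipeline identically (same normalization stats and the same final 224x224 size), since a mismatch there also inflates the loss on reload.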