I’ve been working through Lessons 1 and 2, substituting a dataset of lichen images for the default dataset. The classes are lichen genera.
I was curious to try the strategy from the lesson 2 notebook of sequentially increasing the image size: I start at a small size (64), training first with layers frozen and then unfrozen, then increase the size to 128 and repeat the training in both the frozen and unfrozen states. The frozen epochs seem pretty stable:
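For concreteness, here is a minimal sketch of the schedule I'm following. The fastai calls in the comments (`learn.freeze()`, `learn.unfreeze()`, `fit_one_cycle`) are the real API, but the sizes, epoch counts, and the `get_dls` helper are stand-ins for my own setup:

```python
def progressive_schedule(sizes=(64, 128), frozen_epochs=5, unfrozen_epochs=5):
    """Build the list of training phases, in order, for progressive resizing.

    Each phase is (image_size, state, n_epochs). In the actual notebook each
    phase corresponds to rebuilding the DataLoaders at the new size and then
    running fit_one_cycle with the layers frozen or unfrozen.
    """
    phases = []
    for size in sizes:
        # learn.dls = get_dls(size)   # rebuild DataLoaders at the new size (get_dls is my own helper)
        # learn.freeze(); learn.fit_one_cycle(frozen_epochs)
        phases.append((size, "frozen", frozen_epochs))
        # learn.unfreeze(); learn.fit_one_cycle(unfrozen_epochs)
        phases.append((size, "unfrozen", unfrozen_epochs))
    return phases

print(progressive_schedule())
```

So each image size gets a frozen pass followed by an unfrozen pass before moving up to the next size.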
However, after unfreezing all layers, the accuracy continues to increase, while the training and validation losses are very unstable:
Does anyone have any idea what is happening here? After epoch 0, the model appears to begin overfitting, but in epoch 4 it jumps back to something that looks promising, and then descends into overfit territory again.