I am training my model on the CIFAR-10 dataset with a DenseNet architecture.
Here is the info about the model after the first 50 epochs:
Number of parameters: 98k
Training loss: 0.7143
Training acc: 0.7890
Validation loss: 0.7242
Validation acc: 0.8020
After another 50 epochs:
Training loss: 0.5796
Training acc: 0.8353
Validation loss: 0.5531
Validation acc: 0.8478
Because the validation accuracy is greater than the training accuracy, is the model fitting well, rather than overfitting or underfitting?
Those loss values are pretty close in the relative scheme of over/underfitting I’ve seen, so I think you are fitting well. I’ve seen as much difference as 0.001 training loss vs. 0.6 validation loss, which is very clearly overfitting BIG TIME.
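To make the comparison concrete, here is a tiny sketch (my own heuristic, not anything from fastai) that labels the fit based on the gap between training and validation loss; the `tol` threshold is an arbitrary assumption:

```python
def fit_diagnosis(train_loss, val_loss, tol=0.1):
    """Crude label based on the gap between train and val loss.
    tol is an arbitrary cutoff, not a standard value."""
    gap = val_loss - train_loss
    if gap > tol:
        return "possible overfitting"    # val loss notably worse than train
    if gap < -tol:
        return "val easier than train"   # unusual; worth checking the split
    return "losses are close"

print(fit_diagnosis(0.5796, 0.5531))  # your numbers: gap is only about -0.03
print(fit_diagnosis(0.001, 0.6))      # my example: clear overfitting
```

With your numbers the gap is tiny, which is why I’d call this a healthy fit.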
I’m wondering how you select your validation set. Is it using fastai’s built-in get_cv_idxs, which randomly selects 20%? Could it be that, by chance, your validation set just happened to be easier than your training set? Either way, it seems like a result to be happy about.
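For reference, a random 20% split can be sketched like this (this is my own minimal version of the idea, not fastai’s actual get_cv_idxs implementation; the seed and percentage are assumptions):

```python
import numpy as np

def random_val_idxs(n, val_pct=0.2, seed=42):
    """Randomly pick val_pct of n indices for validation.
    A sketch in the spirit of fastai's get_cv_idxs, not its real code."""
    rng = np.random.RandomState(seed)  # fixed seed for a reproducible split
    idxs = rng.permutation(n)          # shuffle all indices 0..n-1
    n_val = int(n * val_pct)
    return idxs[:n_val]                # first val_pct of the shuffle

val_idxs = random_val_idxs(50000)  # CIFAR-10 has 50k training images
print(len(val_idxs))               # 10000
```

With a random split like this, some runs will land an easier-than-average validation subset, which could explain validation accuracy slightly above training accuracy.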