Want to know if the model is overfitting, underfitting, or fitting correctly

I am training my model on the CIFAR-10 dataset with a DenseNet architecture.

Here is the info about the model after the first 50 epochs:
No. of parameters: 98k
Training loss: 0.7143
Training acc: 0.7890
Validation loss: 0.7242
Validation acc: 0.8020

After another 50 epochs:

Training loss: 0.5796
Training acc: 0.8353
Validation loss: 0.5531
Validation acc: 0.8478

Since the validation accuracy is greater than the training accuracy, is the model fitting well rather than overfitting or underfitting?
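One way to check is to plot the two loss curves together; here is a minimal sketch with matplotlib, assuming per-epoch losses were logged to lists (the variable names and intermediate values below are placeholders; only the epoch-50 and epoch-100 figures come from the post):

```python
import matplotlib.pyplot as plt

# Placeholder histories; only 0.7143/0.7242 (epoch 50) and
# 0.5796/0.5531 (epoch 100) are real numbers from the post.
train_losses = [1.10, 0.90, 0.7143, 0.65, 0.5796]
val_losses   = [1.05, 0.88, 0.7242, 0.63, 0.5531]

checkpoints = range(1, len(train_losses) + 1)
plt.plot(checkpoints, train_losses, label="training loss")
plt.plot(checkpoints, val_losses, label="validation loss")
plt.xlabel("checkpoint")
plt.ylabel("loss")
plt.legend()
plt.show()

# Overfitting: validation loss turns upward while training loss keeps falling.
# Underfitting: both curves plateau at a high loss.
# Curves that fall together and stay close, as here, look healthy.
```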

Those loss values are pretty close by the standards of over/underfitting I’ve seen, so I think you are fitting well. On one of my models I’m seeing as big a difference as a 0.001 training loss vs. a 0.6 validation loss, which is very clearly overfitting, big time.
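To put numbers on that (values copied from the posts above):

```python
# Generalization gap = validation loss - training loss
gap_here   = 0.5531 - 0.5796   # -0.0265: validation is actually a bit lower
gap_severe = 0.6    - 0.001    # +0.5990: the "overfitting big time" case
print(f"this model: {gap_here:+.4f}, severe overfit: {gap_severe:+.4f}")
```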

Things are good as long as your validation accuracy keeps increasing.
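One way to automate that rule is simple early stopping; here's a minimal sketch, assuming a generic training loop (`train_one_epoch` and `evaluate` are hypothetical helpers, not anything from your code):

```python
# Early-stopping sketch: train_one_epoch() and evaluate() are hypothetical
# stand-ins for whatever your training loop and validation pass look like.
best_acc, patience, bad_epochs = 0.0, 10, 0

for epoch in range(200):
    train_one_epoch(model)           # hypothetical: one pass over train set
    val_acc = evaluate(model)        # hypothetical: accuracy on val set
    if val_acc > best_acc:
        best_acc, bad_epochs = val_acc, 0   # still improving, keep going
    else:
        bad_epochs += 1
        if bad_epochs >= patience:   # no improvement for `patience` epochs
            break
```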

However, have a look at courses/dl2/cifar10-dawn.ipynb for an example of much faster convergence.
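If I remember right, much of that notebook's speed comes from the one-cycle learning-rate policy; here is a rough sketch of the same idea in plain PyTorch (the model, loader, and hyperparameters below are assumptions, not your setup):

```python
import torch
import torch.nn.functional as F
from torch.optim.lr_scheduler import OneCycleLR

# Sketch only: `model` and `train_loader` are assumed to already exist,
# and max_lr / epochs below are illustrative, not tuned values.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)
epochs = 30
scheduler = OneCycleLR(optimizer, max_lr=0.4,
                       epochs=epochs, steps_per_epoch=len(train_loader))

for epoch in range(epochs):
    for xb, yb in train_loader:
        loss = F.cross_entropy(model(xb), yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()  # 1cycle schedules are stepped once per batch
```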

I’m wondering how you select your validation set. Is it the fastai built-in get_cv_idxs, which randomly selects 20%? Could it be that, by chance, your validation set just happened to be easier than your training set? Either way, it seems like a result to be happy about.
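For what it's worth, a fixed-seed random 20% split is easy to reproduce by hand; a minimal numpy sketch (the seed and index names are just illustrative):

```python
import numpy as np

n = 50_000                        # CIFAR-10 training-set size
rng = np.random.default_rng(42)   # illustrative seed; fixes the split
val_idxs = rng.choice(n, size=int(n * 0.2), replace=False)
trn_idxs = np.setdiff1d(np.arange(n), val_idxs)

# Re-running training with a couple of different seeds is a cheap check:
# if validation accuracy still beats training accuracy, the split
# probably wasn't just lucky.
```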