Am I overfitting?

Hi all, I have an interesting predicament. My intuition tells me this is okay, but I wanted the hive's opinion. I have a tabular model where, when training finishes, my train_loss is 0.74 and valid_loss is 0.4417, with an overall accuracy of 84.6%. Of course, I also evaluated on a test set, which showed 84.94% accuracy, slightly higher than the validation accuracy. Should I be worried? I used a random 70/20/10 subsample to generate the three datasets.



Hey Zachary,
Usually, we try to keep our validation and test sets alike; that is, we want them to roughly follow the same distribution. In that case, the validation set is a good estimator of the test-set error.
Also, overfitting is when your training-set accuracy is much higher than your validation-set accuracy, so I wouldn't really be worried about overfitting here.
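For what it's worth, a 70/20/10 split like the one you describe can be sketched with scikit-learn's `train_test_split` (just a minimal illustration on toy data; your actual splitting code is an assumption here). Carving off the test set first and then splitting the remainder 70/20 keeps the proportions exact:

```python
# Sketch of a random 70/20/10 train/valid/test split on toy tabular data.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 5))      # toy tabular features
y = rng.integers(0, 2, size=1000)   # toy binary labels

# First carve off the 10% test set, then split the remaining 90% into 70/20.
X_rest, X_test, y_rest, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0, stratify=y)
X_train, X_valid, y_train, y_valid = train_test_split(
    X_rest, y_rest, test_size=2/9, random_state=0, stratify=y_rest)

print(len(X_train), len(X_valid), len(X_test))  # 700 200 100
```

Stratifying on the labels (as above) also helps keep the class balance, and hence the accuracy estimates, consistent across the three sets.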