If a model can overfit the train set, does it mean it can generalize well?

Let’s say I train a (very complex) model and achieve 100% accuracy on the train set. Is it right to say that this model will generalize well to the validation/test set if I apply methods to overcome overfitting?

Currently I’m working on a dataset where I’ve achieved 100% accuracy on the train set, whereas my validation accuracy is around 52%. It hasn’t dropped; it’s been steady at 51–53% for the last 2000 epochs. Can I hope to improve validation accuracy and train a model that generalizes well?

No matter how complex a model I train, validation accuracy reaches 52–53% and does not improve beyond that. Hence the question: can I improve validation accuracy beyond 52–53%?


Overfitting the train set means that your model has enough representational capacity to fit the training data. Most of the time, that just means it has enough trainable parameters.
It doesn’t tell you much about the generalization potential of the model. That is why you need to try different regularization techniques, and different model architectures, before concluding.
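To see why 100% train accuracy says little on its own, here is a minimal sketch (not from the original post, and a decision tree is just a stand-in for any high-capacity model): an unconstrained tree fit on purely random labels reaches 100% train accuracy by memorizing, yet validation accuracy stays near chance, because there is no pattern to generalize.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))       # random features
y = rng.integers(0, 2, size=1000)     # random binary labels: no real signal

X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# An unconstrained tree has enough capacity to memorize the train set.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_tr, y_tr)

print("train acc:", model.score(X_tr, y_tr))    # 1.0: pure memorization
print("val acc:  ", model.score(X_val, y_val))  # near 0.5: chance level
```

If your real data behaves like this (train accuracy perfect, validation stuck near chance no matter the architecture or regularization), it can be a sign that the features carry little usable signal for the labels, which is worth ruling out before tuning the model further.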