Determining when you are overfitting, underfitting, or just right?

I’m going to respectfully disagree and say this is looking like a good fit. :slight_smile:

Why?

Well, notice that while your training loss outperforms your validation loss from around batch 3000, your validation loss is still going down. Overfitting is when your validation loss starts getting *worse* as your training loss improves, which isn’t the case here. Your validation loss is still improving (just not as quickly as the training loss, which is expected and desirable).
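
If it helps, here’s a minimal sketch of how you could check this programmatically. The loss arrays below are synthetic stand-ins (my assumption, not your data); swap in whatever your training loop recorded:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy loss curves standing in for your recorded per-batch values.
batches = np.arange(5000)
train_loss = 1.0 * np.exp(-batches / 1500) + 0.05
valid_loss = 1.0 * np.exp(-batches / 2000) + 0.10

# Smooth the validation loss, then find where it stops improving.
window = 200
smoothed = np.convolve(valid_loss, np.ones(window) / window, mode="valid")
best = int(smoothed.argmin())
best_batch = best + window // 2  # map smoothed index back to a batch number

if best < len(smoothed) - 1 and smoothed[-1] > smoothed[best]:
    print(f"Valid loss bottomed out near batch {best_batch}: possible overfitting after that.")
else:
    print("Valid loss is still falling: a train < valid gap alone is not overfitting.")

plt.plot(batches, train_loss, label="train")
plt.plot(batches, valid_loss, label="valid")
plt.axvline(best_batch, ls="--", c="gray")
plt.xlabel("batch"); plt.ylabel("loss"); plt.legend(); plt.show()
```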


I think the model is underfitting here: your training loss is always greater than your validation loss. Are you using an LR chosen with the learning rate finder? Also try increasing the size of your hidden layers. If this is a tabular model, have a look at “An Attempt to Find the Right Hidden Layer Size for Your Tabular Learner”.
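
For example, something like this (a sketch assuming a recent fastai v2; the ADULT_SAMPLE dataset and the layer sizes are just stand-ins, not tuned for your data):

```python
from fastai.tabular.all import *

# Stand-in dataset; use your own DataFrame / DataLoaders here.
path = untar_data(URLs.ADULT_SAMPLE)
dls = TabularDataLoaders.from_csv(
    path/'adult.csv', path=path, y_names='salary',
    cat_names=['workclass', 'education', 'marital-status',
               'occupation', 'relationship', 'race'],
    cont_names=['age', 'fnlwgt', 'education-num'],
    procs=[Categorify, FillMissing, Normalize])

# Wider hidden layers than the default [200, 100] to fight underfitting.
learn = tabular_learner(dls, layers=[500, 250], metrics=accuracy)

# Take the LR from the finder's suggestion instead of guessing.
lr = learn.lr_find().valley
learn.fit_one_cycle(5, lr)
```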

I prefer wgpubs’s answer. I am just a newbie to deep learning.

Could you give me your feedback on my result? I read the previous comments and posts here, so it looks OK to me, but I’d like other opinions. Thank you.

[image: training/validation loss plot]

I have a problem with my network, which is a U-Net!

At the very beginning of training, the validation loss starts increasing while the training loss decreases; it is obviously an overfitting problem. I decreased the number of trainable parameters, used batch normalization and dropout layers, increased the L2 regularization parameter, and applied different normalization to the reference and training datasets, but none of these worked for me and the result is the same! Do you have any other suggestions?
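
For context, here is roughly how I am wiring those in (a simplified PyTorch sketch, not my exact U-Net; the channel counts and hyperparameters are placeholders):

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out, p_drop=0.3):
    """One encoder block: conv + batch norm + dropout, as described above."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),   # batch normalization
        nn.ReLU(inplace=True),
        nn.Dropout2d(p_drop),    # spatial dropout
        nn.Conv2d(c_out, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(inplace=True),
    )

model = nn.Sequential(conv_block(3, 32), nn.MaxPool2d(2), conv_block(32, 64))

# L2 regularization applied via weight decay on the optimizer.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```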

Guys, can you check this plot?