Increasing lr or number of epochs to solve underfitting?

Hello there, I am kind of confused. From lesson 2, I remember Jeremy suggesting that increasing the learning rate or the number of epochs might solve the problem of underfitting in our model.
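(For concreteness, here is a minimal sketch of the kind of setup I mean, assuming the fastai v2 API and the PETS dataset from the course notebooks; the exact epoch counts and learning rates are just placeholders, not the lesson's values.)

```python
from fastai.vision.all import *

# Minimal sketch (my own illustration): a standard image classifier setup.
path = untar_data(URLs.PETS)
files = get_image_files(path/"images")
dls = ImageDataLoaders.from_name_re(
    path, files, pat=r'(.+)_\d+.jpg$', item_tfms=Resize(224))
learn = vision_learner(dls, resnet34, metrics=accuracy)

# Baseline: a couple of epochs at the default learning rate.
learn.fine_tune(2)

# If it underfits, train longer and/or with a larger learning rate, e.g.:
# learn.fine_tune(8, base_lr=3e-3)
```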

Let's say I have a model with training loss = 1.00 and validation loss = 0.10, and let's say this validation loss and the corresponding accuracy are actually good enough for my application, but the model is obviously underfitting. Should I still consider this a problem and avoid using the underfitted model?

And suppose increasing the lr or the number of epochs results in something like 0.40 training loss, 0.40 validation loss, and lower accuracy compared to the previous situation. Although I now have a higher validation loss and lower accuracy, should I consider the second model the better one just because it is no longer underfitting?

Thanks in advance :slight_smile:

Don’t worry about that; see which of the two gives you better accuracy on the test set. Most probably you will see higher accuracy for the second one.
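Something like the rough sketch below, assuming the fastai v2 API; `learn1`, `learn2`, and `test_items` are placeholders for your two trained Learners and your labelled held-out test data.

```python
from fastai.vision.all import accuracy

# Rough sketch: evaluate both models on the same labelled test set.
# Assumes learn1 and learn2 were built with compatible data processing,
# so one test DataLoader can be reused for both.
test_dl = learn1.dls.test_dl(test_items, with_labels=True)

for name, learn in [("model 1 (underfit)", learn1), ("model 2 (retrained)", learn2)]:
    preds, targs = learn.get_preds(dl=test_dl)
    print(f"{name}: test accuracy = {accuracy(preds, targs).item():.4f}")
```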