My name is Jonathan and I'm writing from Chile … I'm taking the Practical Deep Learning for Coders course (v3), and I once heard Jeremy say that it's not good model behavior to have training losses higher than validation losses.
I'm playing with the MNIST dataset, and these are my results:
Training size: 40.8K | Validation size: 10.2K | Frozen model | ResNet34
Note that only the last two epochs have training losses lower than the validation losses.
Training size: 40.8K | Validation size: 10.2K | Unfrozen model | ResNet34
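For context, my training setup is roughly along these lines. This is just a minimal sketch assuming the fastai v1 API from the course notebooks; my actual data split, transforms, and number of epochs differ a bit:

```python
from fastai.vision import *

# Download the full MNIST dataset and split ~80/20 into train/validation
path = untar_data(URLs.MNIST)
data = ImageDataBunch.from_folder(path, train='training', valid_pct=0.2,
                                  seed=42, size=28, bs=64).normalize(imagenet_stats)

# Frozen phase: the pretrained ResNet34 body stays fixed, only the new head trains
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(8)

# Unfrozen phase: train the whole network with discriminative learning rates
learn.unfreeze()
learn.fit_one_cycle(8, max_lr=slice(1e-5, 1e-3))
```

The frozen results above come from the first fit_one_cycle, and the unfrozen results from the second one after learn.unfreeze().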
I just want to know if I'm doing this well. I'm trying to follow Jeremy's advice, and also what I learned about learning curves in lesson 6 of Andrew Ng's Coursera course "Machine Learning", where he said that a well-trained model's curves tend to look something like this (I think the principle applies even though Andrew Ng's lesson was about linear regression):
- Did my first, frozen model need to be fine-tuned? I ask because of the training and validation loss curves (the training loss was not always lower than the validation loss).
- Did my second, unfrozen model do well? I ask because the training loss was always lower than the validation loss. However, the error_rate goes up a little in the last 4 epochs … (I've added a quick sketch below of how I'm plotting these curves.)
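In case it matters, here is how I'm generating the loss curves I'm comparing (again a sketch assuming the fastai v1 recorder from the course; in fastai v2 the equivalent call is learn.recorder.plot_loss()):

```python
# Plot the per-batch training loss and per-epoch validation loss together
learn.recorder.plot_losses()

# Plot the tracked metric (error_rate here) across epochs
learn.recorder.plot_metrics()
```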
I'd really appreciate any feedback about my models.
Jonathan, from Chile