I think the metrics are computed on the validation set, because print_stats is called with vals.
That is the first thing I tried, but there's a lot of clutter and a lot of unexplained usage. This library could really benefit from good documentation.
For example: what is all_val, and why is it assigned a default value instead of being added to the list of arguments? Why is there both a validate and a validate_next?
You can test your hypothesis that "the metrics are on the validation set" like this:
Make your validation set contain only one image (or whatever your data is like), then look at the accuracy during training. If the hypothesis is correct, you should only ever see 100% or 0% accuracy (1. or 0.).
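The reasoning behind this test can be seen with plain Python (no fastai needed): with a single validation example, accuracy is correct/total with total = 1, so it can only ever be 0.0 or 1.0.

```python
def accuracy(preds, targets):
    # fraction of predictions that match their targets
    correct = sum(p == t for p, t in zip(preds, targets))
    return correct / len(targets)

# with one validation item, only two outcomes are possible
print(accuracy([1], [1]))  # 1.0
print(accuracy([0], [1]))  # 0.0
```

So if the metric printed during training ever shows anything other than 0 or 1 with a one-item validation set, it is not being computed on that set.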
To change whether the statistics are evaluated on the training or the validation set, I would rather implement my own fit function, because fit really is messy. But if anyone can help you with that directly, that would work too.
In one of the lessons, I am sure we implemented our own fit function. Just copy that, then edit it so it does what you want. I realize this is a lot of work, so if anyone knows a better solution, do tell @aayushy
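To make the idea concrete, here is a minimal sketch of such a custom fit loop where you choose which dataset the metric is computed on. The model here is a toy majority-class predictor invented purely for illustration; none of these names (MajorityModel, train_step, metric_set) come from fastai, and the real fit function does much more.

```python
class MajorityModel:
    """Toy 'model' that always predicts the most common training label."""
    def __init__(self):
        self.prediction = 0

    def train_step(self, labels):
        # "training": remember the majority label
        self.prediction = max(set(labels), key=labels.count)

    def predict(self, x):
        return self.prediction

def accuracy(model, data):
    correct = sum(model.predict(x) == y for x, y in data)
    return correct / len(data)

def fit(epochs, model, train_set, metric_set):
    # metric_set controls where the statistics are evaluated:
    # pass train_set here for training metrics, or a validation
    # set for validation metrics
    for epoch in range(epochs):
        model.train_step([y for _, y in train_set])
        print(epoch, accuracy(model, metric_set))

train = [(0, 1), (1, 1), (2, 0)]  # (input, label) pairs
valid = [(3, 1)]
fit(1, MajorityModel(), train, metric_set=valid)
```

The only design point that matters is the metric_set parameter: in the library's fit the evaluation set is fixed for you, whereas here you pick it per call.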
I had the same question today.
I trained a model with no validation set (split_none()), and the metric was not displayed. When I added a validation set, the metric score appeared at the end of each epoch, which indicates that the scores are calculated on the validation set.
I also have this question.
Edit: We can find the answer on this page: https://docs.fast.ai/training.html — "Note that metrics are always calculated on the validation set."