Are the learner metrics calculated on training or validation set?

When initialising the metrics for the learner object, which dataset (training or validation) are the values calculated on?

Basically, what I’m asking is: when I see training statistics like the ones below …

epoch      trn_loss   val_loss   accuracy   precision  recall
    0      1.06871    3.607499   0.432395   0.113374   0.316821
Epoch: 100%|██████████| 1/1 [00:13<00:00, 13.81s/it]
epoch      trn_loss   val_loss   accuracy   precision  recall
    0      0.887629   1.399241   0.582809   0.22717    0.390596

… are the accuracy, precision, and recall computed on the validation set?

Secondly, whichever set it is, is there a feature to also evaluate these statistics on the other set?


I am unsure, but you should be able to find out by inspecting and understanding the fit function.

The fit function is in fastai/model.py; use ??fit to view its source.
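
For example, in a Jupyter notebook (assuming the 0.7-era layout where fit lives in fastai.model, as described above):

# Jupyter/IPython: ?? displays the full source of an object.
from fastai.model import fit  # 0.7-era location, per the post above
??fit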

You can also change the code of fit or use callbacks to “evaluate these statistics on the other set”.
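
As a rough sketch of the callback route: the hook name and call convention below are assumptions, not the exact fastai 0.7 Callback interface (check fastai/sgdr.py for the real one), but something along these lines could recompute accuracy on the training set after each epoch:

import torch

class TrainSetAccuracy:
    """Hypothetical epoch-end hook: recompute accuracy on the training set.
    Assumes an on_epoch_end(metrics) convention; verify against fastai/sgdr.py."""
    def __init__(self, model, train_dl):
        self.model, self.train_dl = model, train_dl

    @torch.no_grad()
    def on_epoch_end(self, metrics):
        self.model.eval()
        correct = total = 0
        for xb, yb in self.train_dl:
            preds = self.model(xb).argmax(dim=1)
            correct += (preds == yb).sum().item()
            total += len(yb)
        self.model.train()
        print(f"train-set accuracy: {correct / total:.4f}")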

I think the metrics are on the validation set, because print_stats is called with vals.

That is the first thing I tried, but there’s a lot of clutter and a lot of unexplained usage. This library could really benefit from good documentation.

For example, what is all_val, and why is it assigned a default inside the function instead of being added to the list of arguments? Why is there both a validate and a validate_next?

It is pretty messy!

You can test your hypothesis that “the metrics are on the validation set” as follows:

Make your validation set just one image (or the equivalent for whatever your data looks like), then watch the accuracy while training. If the hypothesis is correct, you should only ever see 100% or 0% accuracy (1.0 or 0.0).
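
To see why this works: accuracy is the mean of per-example 0/1 correctness, so with a single validation example the mean can only be 0.0 or 1.0. A toy sketch (the values are made up):

import numpy as np

preds  = np.array([1])  # predicted class for the lone validation example
target = np.array([1])  # its true class
print((preds == target).mean())  # prints 1.0 -- never anything in between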

To change whether the statistics are evaluated on the validation or the training set, I would rather implement my own fit function, because fit really is messy. But if anyone can help you with that, that would work too.

In one of the lessons, I am sure we implemented our own fit function. Just copy that and edit it so it does what you want, along the lines of the sketch below. I do realize this is a lot of work :slight_smile: so if anyone knows a better solution, do tell @aayushy
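
For reference, a minimal plain-PyTorch sketch of such a loop, evaluating loss and accuracy on both sets at the end of each epoch (the evaluate helper is made up for illustration; this is not fastai’s fit):

import torch

@torch.no_grad()
def evaluate(model, dl, loss_fn, device):
    # Average loss and accuracy of `model` over one DataLoader.
    model.eval()
    total_loss, correct, n = 0.0, 0, 0
    for xb, yb in dl:
        xb, yb = xb.to(device), yb.to(device)
        out = model(xb)
        total_loss += loss_fn(out, yb).item() * len(xb)
        correct += (out.argmax(dim=1) == yb).sum().item()
        n += len(xb)
    return total_loss / n, correct / n

def fit(model, train_dl, valid_dl, loss_fn, opt, epochs, device="cpu"):
    # Minimal training loop that reports metrics on BOTH sets each epoch.
    for epoch in range(epochs):
        model.train()
        for xb, yb in train_dl:
            xb, yb = xb.to(device), yb.to(device)
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()
            opt.zero_grad()
        trn_loss, trn_acc = evaluate(model, train_dl, loss_fn, device)
        val_loss, val_acc = evaluate(model, valid_dl, loss_fn, device)
        print(f"{epoch:5d}  trn_loss {trn_loss:.4f}  trn_acc {trn_acc:.4f}  "
              f"val_loss {val_loss:.4f}  val_acc {val_acc:.4f}")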


I had the same question today.
I trained a model with no validation set (split_none()), and the metric was not displayed; when I added a validation set, the metric score started appearing at the end of each epoch, which indicates that the scores are calculated on the validation set.
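
A sketch of that experiment with the fastai v1 data block API (path and the ImageList source are placeholders for whatever your data is):

from fastai.vision import *

# No validation set: training prints losses only, no metric column.
data_no_val = (ImageList.from_folder(path)
               .split_none()              # empty validation set
               .label_from_folder()
               .databunch())

# With a validation split, the metric appears at the end of each epoch.
data_with_val = (ImageList.from_folder(path)
                 .split_by_rand_pct(0.2)  # hold out 20% for validation
                 .label_from_folder()
                 .databunch())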


I also have this question.
Edit: We can find the answer (“Note that metrics are always calculated on the validation set.”) on this page: https://docs.fast.ai/training.html.
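
On the second half of the original question (evaluating the same statistics on the other set): fastai v1’s Learner.validate accepts a DataLoader, so passing the training one should work. A sketch, assuming a trained learn object:

# Metrics printed during fit come from the validation set (per the docs).
val_stats = learn.validate()                     # default: validation set
trn_stats = learn.validate(learn.data.train_dl)  # same metrics on train set
print(val_stats, trn_stats)                      # each is [loss, metric1, ...]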
