How to evaluate a fastai model?

I’m new to fastai and don’t quite grasp some of the concepts yet.

Here are some questions I have:

When using fit_one_cycle, a results table is shown at the end.
(screenshot of the fit_one_cycle results table)
How is the error_rate in this case calculated (on the train or the validation set)?
(It looks like it is on the training set, but I couldn’t find an official description of the contents of this table.)

If it is calculated on the training set, how come a lot of tutorials (another tutorial) use it as a measure of the classifier’s performance?
If not, how can I get performance on the validation and test sets separately (in a fastai way, if one exists)?

PS: I spent a fair amount of time trying to find an answer without stumbling on reliable content, so I hope you don’t mind answering a “stupid” question.

It seems you’ve come to fastai through a 2020 tutorial, so I’m guessing you haven’t discovered the 2022 Fastai Course that was released a couple of weeks ago. I think you will find your answers, and more, by watching those videos and reading the book.


The error_rate metric is calculated on the validation set. The Recorder callback, which attaches to the Learner, is responsible for tracking the loss and metrics during training, and by default it calculates the metrics on the validation data only, as can be seen from the signature of its constructor.
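
For reference, the relevant part of the Recorder constructor looks roughly like this (paraphrased from the fastai source; exact defaults may differ between versions):

class Recorder(Callback):
    # train_metrics defaults to False and valid_metrics to True,
    # which is why the results table only shows validation metrics by default
    def __init__(self, add_time=True, train_metrics=False, valid_metrics=True, beta=0.98):
        ...

To also show metrics on the training set, you can flip that flag on your Learner’s recorder: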

from fastai.vision.all import *   # provides vision_learner, resnet18, error_rate

learn = vision_learner(dls, resnet18, metrics=error_rate)  # dls is the DataLoaders you already built
learn.recorder.train_metrics = True                        # also compute metrics on the training set
learn.fine_tune(3, 0.1)

Now both the train & valid error rates will be shown.


Can you please tell me how to evaluate on the test set?

PS: I am quite puzzled by how difficult it is to get answers to such simple questions. I’ve seen the tutorial hinted at by @bencoman, but that looks like a lot of work just for a framework change.

Just to clarify: I’m a novice at this, having just completed the Part 1 lessons. I didn’t have a specific answer, just a memory of a good answer in the lessons.

Can you please tell me how to evaluate on the test set?

Searching for: fastai error rate “test set”
I found Calculating the Accuracy for test set.
HTH


For evaluation on a test set, use learn.get_preds(), passing in a dataloader created with test_dl:

learn.dls.test_dl??

"Create a test dataloader from test_items using validation transforms of dls"

tdl = learn.dls.test_dl(["test/bird.jpg", "test/forest.jpg"])
learn.get_preds(dl=tdl, with_decoded=True)
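
If your test items come with labels and you want a metric (e.g. error_rate) rather than raw predictions, one option is to build a labelled test dataloader and run learn.validate on it. A minimal sketch, assuming a recent fastai version and test images organised the same way as the training data so the labels can be inferred (the path below is hypothetical):

from fastai.vision.all import get_image_files

test_items = get_image_files("path/to/test")            # hypothetical test folder
tdl = learn.dls.test_dl(test_items, with_labels=True)   # keep the targets so metrics can be computed
loss, err = learn.validate(dl=tdl)                       # returns [loss, *metrics]; here one metric (error_rate)
print(f"test error_rate: {err:.4f}")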

Check this notebook
Also this blog
