From the first lesson, when I run learn.fit_one_cycle(...), the metrics train_loss, valid_loss, and error_rate are printed out.
But if I'm loading a previously saved model, how do I calculate just the metrics again, or directly print them if they are stored with the model? I don't want to retrain the model, I just want to print out the metrics for the loaded model. And if I'm understanding it correctly, calling fit or fit_one_cycle will retrain the model for an epoch.
Check out the saving and loading models page from the PyTorch tutorials.
It shows that you can save the loss along with the model when calling torch.save and load it back again. I think it should work with fastai models as well, but I'm not sure; you might have to try that out.
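Roughly, the pattern from that tutorial looks like this (a minimal sketch; the tiny model, file name, and stored loss value are just placeholders for illustration):

```python
import torch
import torch.nn as nn

# Placeholder model and optimizer, just to make the example self-contained.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
valid_loss = 0.123  # whatever loss/metric you recorded after training

# Save the metric in the same dict as the weights...
torch.save({
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'valid_loss': valid_loss,
}, 'checkpoint.pth')

# ...then load it back later and print it without retraining.
checkpoint = torch.load('checkpoint.pth')
model.load_state_dict(checkpoint['model_state_dict'])
print(checkpoint['valid_loss'])
```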
Isn't there a way to just re-calculate them on the dataset? Maybe as part of the ClassificationInterpretation object?
I had the same problem and used the confusion matrix from the ClassificationInterpretation object to calculate the error_rate.
Can you share your code here?
Yeah, sure. Just use this line:
round(1 - sum(interp.confusion_matrix().diagonal()) / interp.confusion_matrix().sum(), 6)
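For context, here is roughly how that fits together (a sketch assuming fastai v1 and that `learn` is a Learner you have already restored with learn.load(...)):

```python
from fastai.vision import *  # fastai v1 style import

# Build the interpretation object on the validation set of the loaded learner.
interp = ClassificationInterpretation.from_learner(learn)

# Error rate = 1 - accuracy, where accuracy is the trace of the confusion
# matrix divided by the total number of validation samples.
cm = interp.confusion_matrix()
error_rate = round(1 - cm.diagonal().sum() / cm.sum(), 6)
print(error_rate)
```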
Has anyone figured out how to do it in fastai, or whether it's even possible?
Hello guys, I think I found a solution to the initial question.
You can use the learn.validate() method, passing it either the training or the validation data loader, with callbacks=None and metrics=[error_rate].
This returns a list containing the training or validation loss, depending on which data loader you chose, and the error rate for that data loader.
I found the solution here: https://docs.fast.ai/basic_train.html#Learner, under the validate section,
and I'm also posting an image of my implementation to help you, where my metric is accuracy instead of error_rate (see the sketch below).
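In code, the idea is roughly this (a minimal sketch assuming fastai v1 and a learner that was already restored from a saved model; the commented-out setup lines are just placeholder examples):

```python
from fastai.vision import *

# Assume `learn` was created and restored from a saved model, e.g.:
# learn = cnn_learner(data, models.resnet34, metrics=accuracy)
# learn.load('stage-1')

# Validation set: returns [valid_loss, accuracy]
print(learn.validate(learn.data.valid_dl, metrics=[accuracy]))

# Training set: returns [train_loss, accuracy]
print(learn.validate(learn.data.train_dl, metrics=[accuracy]))
```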