Is there a simple way to evaluate a model on the test set?
Essentially, this is like running the model in its deployed, inference-time mode.
Use get_preds(), as suggested in this answer in your other thread and as described for Batch Prediction; a rough sketch is below.
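A rough sketch of what that looks like end to end (assuming fastai v1 and an image classifier; the path, folder names, architecture, and epoch count are all illustrative):

```python
from fastai.vision import *

path = Path('data/my_dataset')  # illustrative

# Build the DataBunch with an (unlabelled) test folder attached
data = (ImageDataBunch.from_folder(path, test='test', size=224)
        .normalize(imagenet_stats))
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)

# Batch prediction over the whole test set; the second return value is
# dummy zero targets, because fastai treats the test set as unlabelled
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
```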
Also, just to check you're familiar with the fastai context for these terms: watch Lesson 4 at 45:11, where Jeremy advises that “perhaps the most important idea in machine learning is the idea of having separate training, validation and test data sets…”
get_preds() helps with obtaining predictions on the test set, but it doesn't help with calculating accuracy on it; the closest I've got is computing it by hand from the raw predictions (see the sketch below). If get_preds can do it directly, can you please provide a code snippet?
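For concreteness, here is the kind of manual workaround I mean (a sketch assuming fastai v1 and an already-trained Learner `learn`; `true_labels` is a hypothetical tensor of ground-truth class indices that I have to keep myself, since the fastai test set carries no labels):

```python
from fastai.basic_data import DatasetType

# Predictions on the (unlabelled) fastai test set; the second return value
# is all zeros, so it is useless for scoring
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
pred_classes = preds.argmax(dim=1)

# true_labels: hypothetical tensor of ground-truth class indices,
# kept separately because the fastai test set has no labels
acc = (pred_classes == true_labels).float().mean().item()
```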
How can I get accuracy (directly) on the test set?
I really don't understand why this question is such a burden (I'm apparently not the first to ask). Accuracy on the test set is a proxy for how well the model will perform in production, and validation accuracy alone can't tell us that, because we could have overfitted our hyperparameters to the validation set.
Why is data.test not available, the way data.train and data.valid are?
I tried loading the test set in a new DataLoader, but the results are not consistent.
For some reason this still takes a bit of collecting different pieces of the puzzle, but in the end the following worked for me. After the standard training of the model with a train and valid dataset, you can get the accuracy on a new set of data (the test set) as follows:
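(A minimal sketch of the idea, assuming fastai v1, a learner `learn` already trained with error_rate as its metric, and a labelled test folder sitting next to train; the path, folder names, and image size are illustrative.)

```python
from fastai.vision import *

path = Path('data/my_dataset')  # illustrative

# Load the labelled test data as the *validation* set of a fresh DataBunch,
# so that fastai will compute metrics over it
data_test = ImageDataBunch.from_folder(path, train='train', valid='test',
                                       size=224).normalize(imagenet_stats)

# validate() returns [loss, metric]; with error_rate as the metric,
# accuracy is one minus the second element
acc2 = learn.validate(data_test.valid_dl)
acc = 1 - acc2[1]
```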
I have tried your code and I am a little bit confused: in the last line you say that acc equals 1 - acc2[1]. Is that referring to how accuracy is calculated, or do I actually need to subtract the second value of acc2 from 1 to determine the real accuracy?