Is there a simple way to evaluate a model on a test set?
Essentially, this is like running the model in its deployed, inference-time mode.
Use get_preds(), as suggested in this answer in your other thread, and as also described under Batch Prediction.
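As a minimal sketch of the idea: in fastai v1 you would call something like `preds, _ = learn.get_preds(ds_type=DatasetType.Test)`, which gives you one row of class probabilities per test item. The snippet below fakes that output with plain lists (the probabilities and labels here are made up for illustration) so the scoring step itself is clear without needing a trained learner.

```python
# Sketch: scoring test-set predictions against known labels.
# In fastai v1 the probabilities would come from something like:
#   preds, _ = learn.get_preds(ds_type=DatasetType.Test)
# Here we stand in hypothetical values so the logic is runnable on its own.

def accuracy(probs, labels):
    """Fraction of rows whose argmax class matches the true label."""
    correct = 0
    for row, y in zip(probs, labels):
        pred = max(range(len(row)), key=row.__getitem__)  # argmax over classes
        if pred == y:
            correct += 1
    return correct / len(labels)

# Hypothetical probabilities for 4 test items over 3 classes.
probs = [
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.3, 0.3, 0.4],
    [0.6, 0.3, 0.1],  # misclassified: predicts class 0, true label is 1
]
labels = [0, 1, 2, 1]

print(accuracy(probs, labels))  # → 0.75
```

In practice you would compare the argmax of the returned probabilities against the test labels (or use a fastai metric) rather than hand-rolling this, but the shape of the computation is the same.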
To make sure you're familiar with the fastai context for these terms, watch Lesson 4 at 45:11, where Jeremy advises: “perhaps the most important idea in machine learning is the idea of having separate training, validation and test data sets…”