Using a labeled test set with fastai?

Why does it make sense not to have labels for your test set? For example, if I train my model on a train and validation set and then want to run it on a test set, how do I do this?

Do I have to make a new DataBunch and use that as my data?

Do I run learn.get_preds(ds_type=DatasetType.Test)? It gives me the probabilities, and I can run argmax on them. But why not include the labels so I can just evaluate on them?
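
For reference, a minimal sketch of that call, assuming a trained fastai v1 Learner named learn whose DataBunch was built with a test set:

```python
from fastai.basic_data import DatasetType

# Sketch (fastai v1): `learn` is assumed to be a trained Learner whose
# DataBunch was built with a test set, e.g. test='test' in from_folder.
preds, _ = learn.get_preds(ds_type=DatasetType.Test)  # second value is dummy labels
pred_classes = preds.argmax(dim=1)                    # predicted class index per item
```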

Do I create a new DataBunch for the test set and pass it into validate like this: learn.validate(test_data.train_dl, metrics=[accuracy])? What if I want the probabilities? I can't use learn.predict(), nor can I use learn.pred_batch().
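
For what it's worth, here is a sketch of one way to get both metrics and probabilities, assuming the labeled test set has been loaded as the validation set of a second, hypothetically named DataBunch test_data:

```python
from fastai.basic_data import DatasetType
from fastai.metrics import accuracy

# `test_data` is a hypothetical second DataBunch whose *validation* set is
# the labeled test set; `learn` is an already-trained Learner.
loss_and_metrics = learn.validate(test_data.valid_dl, metrics=[accuracy])

# For the probabilities, point the Learner at the new DataBunch first:
learn.data = test_data
probs, y_true = learn.get_preds(ds_type=DatasetType.Valid)
```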

What is the correct way to predict on a test dataset (with labels) and get its probabilities for further analysis, such as an f1_score (which also isn't usable when training on non-multi-class data)?

This question has been asked and answered already… See here for how to create a second DataBunch when you want to validate your model on another set.
As for the F1 score, it is available on non-multi-class data if you use FBeta(beta=1) (another question that has been asked a lot on the forum).
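
For example, a sketch of both points together, assuming an image task with the labeled test images in a 'test' folder (path and the folder names are assumptions about your layout):

```python
from fastai.vision import *

# Load the labeled test folder as the *validation* set of a second DataBunch.
data_test = ImageDataBunch.from_folder(path, train='train', valid='test',
                                       ds_tfms=get_transforms(), size=224)

# FBeta(beta=1) is the F1 score for single-label classification; with more
# than two classes you likely want FBeta(beta=1, average='macro').
learn.validate(data_test.valid_dl, metrics=[accuracy, FBeta(beta=1)])
```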

Why does it make sense to have no labels for your test set? Well, when you are in a Kaggle competition, your test set doesn’t have labels and you need to provide predictions. The fastai test set is there for that. If all you want to do is validate, use the validation set, with another DataBunch if you have another validation set.
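
To illustrate the intended workflow, a rough sketch of a Kaggle-style run; the folder names, the resnet34 choice, and the submission columns are assumptions, not part of this thread:

```python
import pandas as pd
from fastai.vision import *

# Unlabeled test images come in via `test=`; fastai attaches them as data.test_ds.
data = ImageDataBunch.from_folder(path, train='train', valid='valid', test='test',
                                  ds_tfms=get_transforms(), size=224)
learn = cnn_learner(data, models.resnet34, metrics=accuracy)
learn.fit_one_cycle(4)

# The test set has no labels, so get_preds returns placeholder targets.
preds, _ = learn.get_preds(ds_type=DatasetType.Test)
labels = [data.classes[int(i)] for i in preds.argmax(dim=1)]
pd.DataFrame({'fname': [f.name for f in data.test_ds.items],
              'label': labels}).to_csv('submission.csv', index=False)
```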

Well, I wonder why there isn’t an evaluation function, similar to how Keras lets you evaluate a labeled test set. The fastai library adds this extra step where we have to make a new DataBunch with the same training data and swap the test set in as the validation set. It seems awkward to force users to rebuild a DataBunch just to score a labeled test set. Wouldn’t it make more sense to have a DataBunch containing a train, validation, and test set, where the test set can also have labels, so we can evaluate the model on it?
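
That said, the missing Keras-style call is only a thin wrapper away. A sketch of such a helper; the evaluate name and its signature are hypothetical, not part of the fastai API:

```python
from fastai.metrics import accuracy, FBeta

def evaluate(learn, labeled_dl, metrics=None):
    "Hypothetical Keras-style evaluate: score a trained Learner on a labeled DataLoader."
    return learn.validate(labeled_dl, metrics=metrics or [accuracy])

# e.g. evaluate(learn, data_test.valid_dl, metrics=[accuracy, FBeta(beta=1)])
```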

It just seems like the fastai library is tailored to Kaggle competitions rather than being more general-purpose (not that that’s a bad thing; it’s just something I observed). I guess I could simply use PyTorch for more general-purpose applications.

Either way, thanks for your suggestions. Helps out a lot!
