Calculating the Accuracy for the Test Set

I was looking for a way to print the overall accuracy of the model with respect to my test set (which has labels). It turns out that in fastai the test set is always unlabeled, so this is not possible directly; one first has to create another DataBunch (or replace the validation set in the existing one).

Even with another DataBunch, though, I didn't find a way to print the accuracy, so I came up with this function.
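
Roughly like this (just a sketch of the idea rather than my exact code: the helper name test_set_accuracy and its parameters are made up for illustration, and it assumes a trained learn and a df_test with a 'label' column):

    from fastai.vision import *

    def test_set_accuracy(learn, df_test, path, size=256, bs=64):
        # Hypothetical helper: build a DataBunch whose *validation* set is the
        # labeled test set, then let learn.validate() compute the metric.
        ll = (ImageList.from_df(df_test, path=path)
                .split_none()                    # everything goes into the train set...
                .label_from_df(cols='label', label_cls=CategoryList))
        ll.valid = ll.train                      # ...and is mirrored as the valid set
        ll.transform(size=size)                  # resize only, no augmentation
        data = ll.databunch(bs=bs).normalize(imagenet_stats)
        # NB: a .to_fp16() learner also needs the to_half tfm on this dl (see below)
        loss, acc = learn.validate(data.valid_dl, metrics=[accuracy])
        return float(acc)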

@muellerzr: Thank you for this solution.

I was wondering whether I can also use .to_fp16() here?

I tried the following, which did not work:

    learn.data.valid_dl.add_tfm(to_half) = data.valid_dl.add_tfm(to_half)

When I split it into two lines, it seemed to work. Is this right?

    xz = learn.data.valid_dl.add_tfm(to_half)
    xz = data.valid_dl.add_tfm(to_half)
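
(From a quick look at the source, I think fastai's DeviceDataLoader.add_tfm appends the tfm in place and returns None, so xz is presumably just None and the assignments can be dropped; please correct me if that's wrong:)

    learn.data.valid_dl.add_tfm(to_half)  # mutates the loader in place, returns None
    data.valid_dl.add_tfm(to_half)        # no assignment needed, I believe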

Does the complete example below look right? (If it's okay that I ask :slight_smile: - I am afraid of doing something wrong and getting predictions that are silently wrong.)

I also have an "is_valid" column in my df that contains only True values: it doesn't matter if such an extra column is still present in the test set, does it?

And is it important that the batch size here matches the one used for the train/valid sets?

Thank you for your work! :slight_smile:

My complete example:

    from fastai.vision import *  # fastai v1 imports (ImageList, get_transforms, ...)

    il = ImageList.from_df(df_test, path='/home/name_folder')
    ils = il.split_none()  # all data goes into the train set
    ll = ils.label_from_df(cols='label', label_cls=CategoryList)
    ll.valid = ll.train  # @muellerzr trick: mirror the train set as the valid set
    ll.transform(tfms=get_transforms(flip_vert=True, max_zoom=1., max_warp=None), size=256)  # optional transforms
    data = ll.databunch(bs=120)
    data.normalize(imagenet_stats)

    learn.data.valid_dl = data.valid_dl   # replace the Learner's validation set with the labeled test set
    learn.data.valid_dl.add_tfm(to_half)  # needed here because learn was trained with .to_fp16()

    # Interpret
    interp = ClassificationInterpretation.from_learner(learn, ds_type=DatasetType.Valid)
    interp.plot_confusion_matrix()
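
(And to actually print the overall accuracy on the test set, I assume learn.validate on the swapped-in valid_dl should do it; accuracy is fastai's built-in metric:)

    loss, acc = learn.validate(learn.data.valid_dl, metrics=[accuracy])
    print(f'Test set accuracy: {float(acc):.4f}')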