Get_preds is confusing!

I find applying a learner to a test set and getting accuracy confusing in fastai:

Here: https://docs.fast.ai/data_block.html#LabelLists.add_test_folder

It says:

Warning: In fastai the test set is unlabeled! No labels will be collected even if they are available.

And in this tutorial example:
https://docs.fast.ai/tutorial.inference.html#Text

You see:

learn = load_learner(imdb, file = 'export_clas.pkl')
learn.data.add_test(["That movie was terrible!", 
                     "I'm a big fan of all movies with Hal 3000."])

preds,y = learn.get_preds(ds_type=DatasetType.Test)
preds

In this example, y is in fact a zero vector, so it's useless and misleading!

The question is: how can I get accuracy for this test set?
As far as I can tell, I need to create the learner and probably call load, so load_learner isn't applicable here… I'm not sure though, and I still haven't found any tutorial or answer for this question!

Someone correct me if I'm wrong, but I think the y is a cut-and-paste error in the docs.

I've never seen a get_preds function which returns anything except predictions, and fastai's doesn't seem to be any different. It wouldn't really make any sense for it to also return ys from there if you already had them.

I also don't see any reason to think that your test set acc should be any different to your validation acc; since they keep it all separate for us, you can't really mess it up.

I'd imagine that if you really want acc for a test set then you'd need to 1) have a test set with labels which fastai has ignored, and 2) pair them up with the preds yourself.
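Something like this, as a rough sketch (v1-style API; test_labels here is a hypothetical pandas Series holding the true labels in the same order as the test set):

import torch

preds, _ = learn.get_preds(ds_type=DatasetType.Test)  # ignore the dummy y
targets = torch.tensor(test_labels.values)            # labels you kept yourself
acc = (preds.argmax(dim=1) == targets).float().mean()
print(acc)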


Fastai will not ignore your labels if you pass them in with test_dl; you set with_labels to True.

For get_preds specifically, yes: it never assumes labels. If you truly want to grade a labelled test set, you should do learn.validate(dl=dl) instead.

Also, you've referenced v1, but you're in the v2 sub-forum, and this answer is geared towards v2.
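A minimal v2 sketch of that, assuming learn is a trained classifier whose only metric is accuracy and test_df (hypothetical) holds the labelled test rows in the same format as the training data:

dl = learn.dls.test_dl(test_df, with_labels=True)  # keep the labels
loss, acc = learn.validate(dl=dl)                  # returns [loss, *metrics]
print(acc)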


I've never seen a get_preds function which returns anything except predictions, and fastai's doesn't seem to be any different. It wouldn't really make any sense for it to also return ys from there if you already had them.

Take this example, which combines the predictions of forward and backward language models:

pred_fwd, lbl_fwd = learn.get_preds(ordered=True)      # forward model
pred_bwd, lbl_bwd = learn_bwd.get_preds(ordered=True)  # backward model
final_pred = (pred_fwd + pred_bwd) / 2                 # average the predictions
accuracy(final_pred, lbl_fwd)

I guess here y is useful because we can pair the predictions with the actual labels. In fact, the reason I am interested in having labels returned by get_preds is that I don't want to pair them up myself, or rather I don't know how to do that…

Anyway, as @muellerzr said, I used validate, but I probably can't do the trick in the code above for averaging predictions, can I?

See this notebook here. It describes combining forwards and backwards models:

However, what you described works, yes (or it should). If it doesn't, you should make extremely sure your validation set is exactly the same.


Thank you, your code is similar to the piece of code I posted.

print(f'SentencePiece Forward and Backwards: {round_num(accuracy(((results[2] + results[3])/2), gt))}')

As I noted, you train two classifiers and average the predictions of each.

However, suppose that you trained two classifiers (bwd, fwd) and everything is finished. You probably save them using export, right? And then load them using load_learner…

You can use validate to apply them to a new test set. However, what if you want to combine the accuracy of both and report that? Is averaging the accuracy of both classifiers equivalent to averaging the predictions of each classifier for every test item and then calculating accuracy? (See the small sketch at the end of this post.)

I found that one can still pass a test set to the learner and use get_preds for the predictions, but this time the y (actual labels) it returns is a zero vector, hence not reliable. I do have the actual targets, but I'm not sure in which order get_preds returns the predictions… Anyway, I would still prefer get_preds to work on a labelled test set, which fastai says isn't possible (why not!?)… To sum up, in the production scenario above, how do you combine the predictions of two classifiers?
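On the averaging question above, a tiny sketch with made-up numbers suggesting the two are not equivalent in general:

import torch

targets  = torch.tensor([1, 0])
pred_fwd = torch.tensor([[0.4, 0.6], [0.6, 0.4]])  # both items right: acc = 1.0
pred_bwd = torch.tensor([[0.9, 0.1], [0.9, 0.1]])  # one item right:  acc = 0.5
# averaging the accuracies gives (1.0 + 0.5) / 2 = 0.75
ensemble = (pred_fwd + pred_bwd) / 2
ens_acc = (ensemble.argmax(dim=1) == targets).float().mean()
print(ens_acc)  # 0.5 here, not 0.75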

OOPS! sorry…

Okay!!

@muellerzr

I got my answer in the notebook you referenced in the other question, thank you!

data_test = (TabularList.from_df(df, path=path, cat_names=cat_names,
                                 cont_names=cont_names, procs=procs)
             .split_none()                   # no validation split
             .label_from_df(cols=dep_var))   # keep the real labels
data_test.valid = data_test.train            # reuse the "train" set as validation
data_test = data_test.databunch()

learn.data.valid_dl = data_test.valid_dl     # swap it in as the learner's valid_dl

preds, y = learn.get_preds(ds_type=DatasetType.Valid)
accuracy(preds, y)

In the example above, I used a new test set and called get_preds on it (since it's now considered the validation set for the learner!), and y now points to the actual labels (no longer a zero vector)… It wasn't very straightforward, but I finally got the answer!

For me the most straightforward approach is this:

dl_test = learn.dls.test_dl(df, with_labels=True)        # keep the labels
preds = learn.get_preds(dl=dl_test, with_decoded=True)   # (probs, targets, decoded preds)
df['preds'] = preds[2]     # decoded predictions
df['targets'] = preds[1]   # actual labels
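If you also want the accuracy the original question asked about, a small follow-up sketch (assuming a classification task, so the decoded predictions in preds[2] are class indices):

acc = (preds[2] == preds[1]).float().mean()  # fraction of decoded preds matching targets
print(acc)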

So how do you apply the learner to a test file in v2?
I still haven't been able to find it!

This is in v1 fast.ai, right?