Chapter 12 is great because it shows us how to build an NLP model from scratch.
I have been trying to get predictions from all the different iterations of the models, but I have been hitting a wall from model 3 onwards. To avoid any tensor-size issues, I use the code below, which simply takes the first training item as the input sequence to get predictions.
prediction, decoded_prediction, fully_decoded_prediction = learn.predict(input_seq)
This approach works for LMModel1 and LMModel2, but from model 3 onwards it hits a very strange error (ValueError: not enough values to unpack). Is anyone facing the same issue, or has anyone found a solution?
Below is the error stack.
/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in predict(self, item, rm_type_tfms, with_input)
248 def predict(self, item, rm_type_tfms=None, with_input=False):
249 dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
--> 250 inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
251 i = getattr(self.dls, 'n_inp', -1)
252 inp = (inp,) if i==1 else tuplify(inp)
ValueError: not enough values to unpack (expected 4, got 3)
This means that
self.get_preds(dl=dl, with_input=True, with_decoded=True) gives you three values, but the code tries to unpack four.
Try removing the underscore:
inp,preds,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
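The mismatch itself is plain Python behaviour, nothing fastai-specific. A minimal stand-in (the fake_get_preds function below is hypothetical, just to reproduce the error) shows why the traceback points at the unpacking line:

```python
# Hypothetical stand-in that returns three values, like get_preds did here
def fake_get_preds():
    return "inp", "preds", "dec_preds"

try:
    # The library code expects four values, so this raises ValueError
    inp, preds, _, dec_preds = fake_get_preds()
except ValueError as e:
    print(e)  # not enough values to unpack (expected 4, got 3)
```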
Thanks for your reply!
Actually, I know the literal meaning of the error. However, the get_preds call is written inside the fastai library (within the predict() function), not in my code. The fact that such an exception happens means that something is wrong elsewhere.
The same fastai library predict() function works perfectly fine for LMModel1 and LMModel2.
PS: I did try to rewrite the predict function myself, but that doesn't seem to solve the problem.
Hi rshun, did you ever find out the cause of the issue? I am struggling with the same issue.
I have been experiencing the same issue. I think it is related to batch size: the default batch size used when predict() creates the test_dl is greater than one. When I create the test_dl manually with bs=1 and invoke get_preds, it works fine.
Hope this helps.
I’m facing the same issue right now…
I tried to build the test_dl manually, but then I ran into another issue with the weight dimensions.
I think the issue comes from the item passed to learn.predict(), and I'm not totally sure what the item should look like. On my side it is a tensor with the same number of channels and the same size as all the tensors used in training, but maybe it has to be different?
If anyone has a clue, that would be great!
Hi, had the same issue.
After looking into the predict functions/options, it worked for me to manually generate a test dataset using the following specs:
# Let's assume our model receives input 0, and the target is 1
random_sample = (tensor(0), tensor(1))
# We need a number of samples that is a multiple of the batch size you
# specified in the learner object (in the notebook they suggest 64).
# Let us just replicate the random_sample tensor:
test_data = [random_sample] * bs
# Use get_preds where dl points at a DataLoader object built from our small test set
preds, targs = learn.get_preds(dl=DataLoader(dataset=test_data, bs=bs, shuffle=False, drop_last=False, num_workers=0))
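For what it's worth, the "multiple of the batch size" requirement comes from how items are grouped into batches: a short final batch can be dropped or change tensor shapes. A plain-Python sketch (make_batches is hypothetical, just to illustrate the grouping) shows that replicating one sample bs times yields exactly one full batch:

```python
def make_batches(items, bs, drop_last=False):
    """Group items into consecutive batches of size bs."""
    batches = [items[i:i + bs] for i in range(0, len(items), bs)]
    if drop_last and batches and len(batches[-1]) < bs:
        batches.pop()  # discard a partial final batch
    return batches

# 64 copies of one (input, target) pair, as in the snippet above
test_data = [((0,), (1,))] * 64
print(len(make_batches(test_data, bs=64)))  # 1, i.e. exactly one full batch
```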
I would like to add to this issue.
When running the examples from the book, I can't make predictions with the LSTM models we create; predicting throws the error "ValueError: not enough values to unpack (expected 4, got 3)". But when running
x, y = dls.one_batch()
learn.model(x)
it outputs a tensor of the correct shape. However, when inspecting the predictions using
preds = learn.model(x).argmax(dim=2)
[vocab[n] for n in preds[0]]
they make absolutely no sense, stuff like: [‘thousand’,‘seven’,‘thousand’,‘seven’,‘hundred’,‘seven’,‘hundred’,‘seven’,‘hundred’,‘seven’…]
I am having the same issue; does anyone know how to use learn.predict() for the LSTM models? @jeremy @sgugger I wish this book had more examples on using learn.predict().
To be fair, the inference API/pipeline is really obscure in the documentation. I am using pure torch for inference because I just can't understand what the methods expect/do.
As @bobseboy mentioned, creating a custom dataset worked for me. The basic idea is to create a list of (x, y) tuples to pass as the dataset in the get_preds method.
Here's a basic Python for loop for reference:
x, y = dls.one_batch()
dataset = []
for i in range(64):
    inp = x[i]
    out = y[i]
    dataset.append((inp, out))
dataloader = DataLoader(dataset=dataset, bs=bs, shuffle=False, num_workers=0)
preds, targs = learn.get_preds(dl=dataloader)
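The same pairing idea in condensed form; plain lists stand in for the x and y tensors here, since the list-of-tuples construction is the same either way:

```python
# Plain lists standing in for the x and y tensors from dls.one_batch()
x = [[1, 2, 3], [4, 5, 6]]
y = [[2, 3, 4], [5, 6, 7]]

# Pair each input row with its target row to build the dataset
dataset = [(inp, out) for inp, out in zip(x, y)]
print(dataset[0])  # ([1, 2, 3], [2, 3, 4])
```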