Chapter 12 is great because it shows us how to build an NLP model from scratch.
I have been trying to get predictions from all the different iterations of the models, but I have been hitting a wall from LMModel3 onwards. To avoid any tensor-size issues, I use the code below to simply take the first training sample as the input sequence and get predictions for it.
```python
input_seq = learn.dls.train_ds[0]  # first training sample
prediction, decoded_prediction, fully_decoded_prediction = learn.predict(input_seq)
print(prediction)
```
This approach worked for LMModel1 and LMModel2, but it hits a very strange error (`ValueError: not enough values to unpack`) from LMModel3 onwards. Is anyone facing the same issue, or has anyone found a solution?
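For context on the error itself: as the traceback below shows, `learn.predict` unpacks four values (`inp, preds, _, dec_preds`) from `get_preds`, but in this case only three come back. Here is a minimal, fastai-free reproduction of that unpacking failure (the helper function is hypothetical, just to mimic `get_preds` returning one value too few):

```python
def fake_get_preds():
    # hypothetical stand-in for get_preds when decoding is skipped:
    # it returns only 3 values instead of the expected 4
    return ("inp", "preds", "targs")

try:
    inp, preds, _, dec_preds = fake_get_preds()
except ValueError as e:
    print(e)  # → not enough values to unpack (expected 4, got 3)
```

So the error is not about the tensor sizes of the input at all; it is about how many values `get_preds` hands back to `predict`.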
Below is the error stack.
```python
/opt/conda/envs/fastai/lib/python3.8/site-packages/fastai/learner.py in predict(self, item, rm_type_tfms, with_input)
    248     def predict(self, item, rm_type_tfms=None, with_input=False):
    249         dl = self.dls.test_dl([item], rm_type_tfms=rm_type_tfms, num_workers=0)
--> 250         inp,preds,_,dec_preds = self.get_preds(dl=dl, with_input=True, with_decoded=True)
    251         i = getattr(self.dls, 'n_inp', -1)
    252         inp = (inp,) if i==1 else tuplify(inp)

ValueError: not enough values to unpack (expected 4, got 3)
```
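One possible workaround while `learn.predict` is misbehaving is to bypass it and call the underlying model on a tensor directly, then decode the raw output with `argmax`. Since I can't verify this against the exact fastai objects here, the sketch below uses a tiny stand-in recurrent model in plain PyTorch (`TinyLM` and its sizes are made up for illustration); with a real `Learner` you would use `learn.model` and your input tensor instead:

```python
import torch
import torch.nn as nn

# hypothetical stand-in for an LMModel3-style recurrent language model
class TinyLM(nn.Module):
    def __init__(self, vocab_sz=10, n_hidden=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_sz, n_hidden)
        self.rnn = nn.RNN(n_hidden, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, vocab_sz)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h[:, -1])  # logits for the next token

model = TinyLM()
x = torch.tensor([[1, 2, 3]])        # one input sequence, like train_ds[0]
with torch.no_grad():
    logits = model(x)                # shape (1, vocab_sz)
pred_idx = logits.argmax(dim=-1)     # decoded prediction: most likely token id
```

With a real learner, `pred_idx` could then be mapped back to a token string via the dataloader's vocab, which is roughly what `learn.predict` does internally when it works.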