I’ve successfully trained a seq2seq model following the instructions in lesson 11. However, I’m struggling to find an example of using this trained model to predict on new, raw manual input. I can find examples for the language model and the image classifier, but their APIs don’t carry over. I’d appreciate it if anyone could point me in the right direction. Thank you.
What do you mean by “raw manual input”? Seq2seq can be used for many things, in images, in NLP. What problem are you trying to solve?
I have the same problem: the seq2seq collate function needs both an input and a target, but at inference time we don’t have the targets.
Here is my collate function:
```python
def seq2seq_collate(samples:BatchSamples, pad_idx:int=1, pad_first:bool=True,
                    backwards:bool=False) -> Tuple[LongTensor, LongTensor]:
    "Function that collects samples and adds padding. Flips token order if needed."
    samples = to_data(samples)
    # each sample is an (input, target) pair, so pad x and y separately
    max_len_x,max_len_y = max([len(s[0]) for s in samples]),max([len(s[1]) for s in samples])
    res_x = torch.zeros(len(samples), max_len_x).long() + pad_idx
    res_y = torch.zeros(len(samples), max_len_y).long() + pad_idx
    if backwards: pad_first = not pad_first
    for i,s in enumerate(samples):
        if pad_first:
            res_x[i,-len(s[0]):],res_y[i,-len(s[1]):] = LongTensor(s[0]),LongTensor(s[1])
        else:
            res_x[i,:len(s[0])], res_y[i,:len(s[1])]  = LongTensor(s[0]),LongTensor(s[1])
    if backwards: res_x,res_y = res_x.flip(1),res_y.flip(1)
    return res_x, res_y
```
Any help is appreciated.
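In case it helps others, here is a workaround sketch for the missing-target problem: pad the raw input yourself and decode greedily, feeding each predicted token back into the decoder, so no target is ever needed. This is only an illustration, not the fastai API; the names `pad_input`, `greedy_decode`, `step_fn`, and the special token ids `PAD`/`BOS`/`EOS` are my own assumptions, so check them against your vocab:

```python
PAD, BOS, EOS = 1, 2, 3  # assumed special token ids; look yours up in the vocab

def pad_input(tokens, max_len, pad_idx=PAD, pad_first=True):
    """Pad a single tokenized input to max_len; no target required."""
    padding = [pad_idx] * (max_len - len(tokens))
    return padding + tokens if pad_first else tokens + padding

def greedy_decode(step_fn, max_len=30):
    """Decode by feeding the last predicted token back into the decoder.

    step_fn(prev_token) -> next_token stands in for one decoder step
    (in practice, an argmax over your trained model's output logits).
    """
    out, tok = [], BOS
    for _ in range(max_len):
        tok = step_fn(tok)
        if tok == EOS: break  # stop once the model emits end-of-sequence
        out.append(tok)
    return out
```

In practice you would wrap `pad_input`’s output in `torch.LongTensor` with a batch dimension and implement `step_fn` with your trained encoder/decoder. Another option that seems to work is passing a dummy target (e.g. the padded input itself) through the collate function above, since the target is ignored when you only read off the predictions.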