How to interpret IMDB sentiment predictions?


#8

I found incomplete predictions too, and I think it’s caused by rounding when the data is split into batches. My hacky solution is to append some redundant rows at the end of the dataframe so that my real data doesn’t get cut off.
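The padding workaround above can be sketched in plain Python (the names `pad_to_batch_size`, `reviews`, and `filler` are illustrative, not fastai API): append filler rows until the row count divides evenly by the batch size, then discard the extra predictions afterwards.

```python
# Hypothetical sketch of the padding workaround: if the loader drops the
# last partial batch, pad the data so len(rows) % bs == 0 and none of the
# real rows are cut off.
def pad_to_batch_size(rows, bs, filler):
    """Append copies of `filler` until len(rows) is a multiple of bs."""
    remainder = len(rows) % bs
    if remainder:
        rows = rows + [filler] * (bs - remainder)
    return rows

reviews = ["good", "bad", "great", "awful", "fine"]  # 5 real rows
padded = pad_to_batch_size(reviews, bs=4, filler="pad")
# len(padded) == 8; the predictions for the 3 filler rows are thrown away
```

The same idea applies to a dataframe: append dummy rows, predict, then keep only the first `len(real_data)` predictions.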

Another thing I noticed is that there may be some shuffling involved in the batches, and as a result the prediction outputs do not follow the original order. This is very annoying, and my other hacky solution is to set the prediction batch size to 1 to preserve the original order.


(Rob H) #9

Thank you for confirming I’m not the only one seeing those two issues!

I feel both of these should be handled by the library by default when predicting on test data.


(WG) #10

I’m thinking the same because when I run

pre_preds, y = m3.predict_with_targs()

the y values do not match the order of my validation dataset.


(Rob H) #11

Hi @runze, how did you set the prediction batch size? predict() won’t take bs as a parameter, and setting m.bs=1 didn’t work either.

Do I need to recreate and load the model with new params?


(WG) #12

See below (looking at spooky comp.):

y is supposed to be the actual targets … but notice that the first two are both 1’s. And yet, when you look at the first two examples in my validation dataset (btw, I’m using my training set as my validation dataset for testing), the actual classes cannot both be 1.


#13

Hey, I didn’t change the model structure - I just modified this line when defining TextData.from_splits from

trn_iter,val_iter = torchtext.data.BucketIterator.splits(splits, batch_size=bs)

to

trn_iter, val_iter, test_iter = torchtext.data.BucketIterator.splits(splits, batch_sizes=(bs, bs, 1))

You can see from the source code here that torchtext.data.BucketIterator.splits actually takes a batch_sizes tuple argument that defines the batch size for each dataset.


#14

Yes, if no shuffling is involved, torchtext sorts the data by the word length of the text object (because we define sort_key as def sort_key(ex): return len(ex.text), as in cell 115 of this notebook), and in the case of ties, it preserves the original order. It does this because it tries to group texts of similar lengths together in a batch to feed into the model. So I would sort my data by those two factors too.
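That two-factor ordering (token length first, original position as tiebreaker) can be reproduced in plain Python, which is handy for re-sorting your dataframe to match torchtext’s output order. This is only a sketch; the variable names are illustrative.

```python
# Reproduce torchtext's stable sort: by token length, with ties broken
# by the original position of each example.
texts = ["a b c", "x", "p q", "m n"]

# sorted() is stable, but making the index part of the key makes the
# tiebreak explicit: ("p q", index 2) comes before ("m n", index 3).
order = sorted(range(len(texts)), key=lambda i: (len(texts[i].split()), i))
sorted_texts = [texts[i] for i in order]
# sorted_texts == ["x", "p q", "m n", "a b c"]
```

Applying the same key to your labels (or dataframe rows) lines them up with the predictions.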


(Rob H) #15

Oh sweet! Yeah I had also modified the source to return an additional iterator for test, otherwise it would break.

We should create a PR for these changes, you think?


(WG) #16

Yup. I was just looking at this.

I was trying to set it to 1, but it keeps throwing an exception.


#17

Absolutely!


(Rob H) #18

Okay, do you want to do it? It’s 4am here for me …

At least one other person is also getting this error.


#19

I don’t think I’ll be able to get to it today though (have a bunch of errands to run). Some time next week, realistically.


(Rob H) #20

Okay, I’ll try to work on it later too. Or maybe someone else will get to it.

I still didn’t get the ordering to work. I trust that your method works as you say. Do you see anything wrong with my code?

full gist


Summary

nlp.py

    trn_iter,val_iter,test_iter = torchtext.data.BucketIterator.splits(splits, batch_sizes=(bs, bs, 1))

in my notebook:

md2 = TextData.from_splits(PATH, splits, 1) #setting all bs to 1, just for making predictions
m3 = md2.get_model(opt_fn, 1500, bptt, emb_sz=em_sz, n_hid=nh, n_layers=nl,
                   dropouti=dropouti, dropout=dropout, wdrop=wdrop,
                   dropoute=dropoute, dropouth=dropouth)
m3.reg_fn = partial(seq2seq_reg, alpha=2, beta=1)
m3.load_encoder(f'h2_adam1_enc')
m3.load_cycle('h1',3)
val_preds,y = m3.predict_with_targs()

res = np.argmax(val_preds,axis=1)


#21

Did you try to sort val_df by text length as discussed above? I might have missed it in your notebook, but this is what I did (my dataset is called txt_test and my text column is text):

# Sort by len
# Because that's how torchtext would sort it,
# Hence need to do the same in order to match its results
txt_test['text_toks'] = txt_test['text'].apply(spacy_tok)
txt_test['text_len'] = txt_test['text_toks'].str.len()
txt_test['index'] = txt_test.index  # Note this is assuming that the data is already sorted by index; if that's not true, use `.iloc` instead

txt_test.sort_values(by=['text_len', 'index'], inplace=True)
txt_test.reset_index(drop=True, inplace=True)

Btw, here is my full notebook, which I hope is right.


(Jeremy Howard (Admin)) #22

FYI as I’m sure you’ve noticed, I haven’t used a test set with this class before - sorry about the shuffling thing! I’m working on tomorrow’s class at the moment so won’t be able to debug right away, but if you want to do so, try looking at how torchtext is handling this. I’m not sure if the issue is in torchtext, or just how I’m calling it.

Both torchtext and fastai are pretty simple code to read - hopefully it’ll be reasonably clear what’s going on. Let me know if I can help clarify anything!


(Rob H) #23

@KevinB was able to get a submission into the Happiness competition, so I think he has something that works, and I don’t think it’s as complicated as we are making it. Perhaps he can enlighten us when he has a chance.


(WG) #24

I think the issue is in torchtext.

If you look at the source code for BucketIterator here, you can see that it always sorts, even if you set sort=False; that simply shifts the sorting to happen within the batches.
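The effect is easy to demonstrate without torchtext at all (this is a plain-Python sketch, not the BucketIterator implementation): even if the dataset as a whole is left in its original order, sorting *within* each batch still changes the order of the concatenated outputs.

```python
# Sketch: chunk the data into batches, sort only within each batch
# (as a bucketing iterator does), and observe the reordering.
def batch_then_sort(items, bs):
    batches = [items[i:i + bs] for i in range(0, len(items), bs)]
    return [x for batch in batches for x in sorted(batch, key=len)]

items = ["cc", "a", "dddd", "bbb"]
reordered = batch_then_sort(items, bs=2)
# reordered == ["a", "cc", "bbb", "dddd"] -- not the original order
```

So with any batch size greater than 1, predictions come back in a batch-local sorted order, which is why shrinking the batch size to 1 (or re-sorting your data the same way) makes the orders line up.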


(Kevin Bird) #25

So all I did was use what Jeremy did in his lesson 4 notebook to predict the label for each sentence. I set my batch size to 1 and pulled the text from the CSV file directly. Then I looped through those one at a time and wrote them to a file. Then I just chose the top prediction and converted it from the index to the actual word. Is there any specific code/questions you are wondering about?


(Rob H) #26

Can you share the code for how you loop through examples to do prediction one by one?


(Kevin Bird) #27
m = m3.model
m[0].bs = 1  # set the RNN's batch size to 1 for single-example prediction
for i in range(tst.values[:, 1].shape[0]):
    ss = tst["Description"][i]  # actual text review
    s = [spacy_tok(ss)]         # tokenize
    t = TEXT.numericalize(s)    # map tokens to vocab indices

    m.eval()   # disable dropout
    m.reset()  # reset the RNN's hidden state
    res, *_ = m(t)
    # take the top-scoring class for the last output and map it back to a label
    prediction = PH_LABEL.vocab.itos[to_np(torch.topk(res[-1], 1)[1])[0]]
