ULMFiT Classification Separation

I’m using Jeremy’s language model fine-tuning technique (ULMFiT) for a classification problem on a small dataset of about 3,000 entries.

I pre-trained a language model on WikiText-103 with the following model parameters:

BPTT: 50; EM_SZ: 300; NH: 800; NL: 3
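
Roughly, that gives an encoder shaped like the following. This is only a minimal plain-PyTorch sketch with the sizes above, not the actual fastai/imdb_scripts code, and the vocabulary size is just a placeholder:

```python
import torch
import torch.nn as nn

VOCAB_SZ = 60000   # placeholder vocabulary size (assumption, not my real vocab)
EM_SZ, NH, NL = 300, 800, 3
BPTT = 50          # tokens per backprop-through-time chunk

class LMEncoder(nn.Module):
    """Sketch of an LSTM language-model encoder with the sizes above."""
    def __init__(self):
        super().__init__()
        self.embedding = nn.Embedding(VOCAB_SZ, EM_SZ)
        self.rnn = nn.LSTM(EM_SZ, NH, num_layers=NL, batch_first=True)
        self.decoder = nn.Linear(NH, VOCAB_SZ)  # next-word prediction head

    def forward(self, tokens):
        out, _ = self.rnn(self.embedding(tokens))
        return self.decoder(out)

# One BPTT-sized batch of token ids
logits = LMEncoder()(torch.randint(0, VOCAB_SZ, (32, BPTT)))
```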

While the predictions I get are largely correct, I’ve noticed that the predicted probabilities (of the correct class) are extremely high.

This is, of course, desirable, but an unexpected effect is that I also get high confidences for predictions on arbitrary sentences (i.e., out-of-context sentences in a domain-specific setting).

Has anyone else made any similar observations? Any ideas on what might be going wrong?

@aayushy do you mean that when you feed an arbitrary sentence to your classifier, it always predicts one specific class? And the predicted class doesn’t depend on the sentence?

Exactly. Say it were a simple happy/unhappy sentiment classification task and I supplied the input text “sun rises in the east”: it would predict happy with a probability > 0.99.

Another observation is that while shorter sentences (one to three words long) get somewhat conservative confidence values, longer sequences are always predicted with very high probabilities.

I think the problem is with your language model. What is the perplexity score of your language model? Have you tested it on next-word prediction, and does it behave as expected?

I’ll run these tests and update. Thanks!

The pre-training perplexity is pretty decent at about 69.24, and the fine-tuned LM is at 55.1.
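
(For context, perplexity here is just the exponential of the average per-token cross-entropy, so those numbers correspond to validation losses of roughly 4.2 and 4.0:)

```python
import math

# Perplexity is exp of the average per-token cross-entropy (natural log),
# as reported by the language-model training loop.
def perplexity(avg_nll):
    return math.exp(avg_nll)

print(perplexity(4.24))  # ~69.4, roughly the pre-training value above
print(perplexity(4.01))  # ~55.1, roughly the fine-tuned value above
```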

Now, while looking for similar questions on the Internet, I found a tweet from Jeremy where he says:

“Classification isn’t measured by perplexity.”

Which makes me wonder if it’s possible that the layers responsible for classification in the imdb_scripts (which is my main reference) are not learning a good classifier.

Thoughts?

I’m really not an expert, but that perplexity value seems a bit high; maybe that has to do with your corpus being small. Have you tried the pretrained wiki or other models? I thought the AWD-LSTM language model method is supposed to get those values below 30–40.

It’s possible that I’m not training to convergence. I’ll try some hyper-parameter variations.

Update

Language Model perplexity: 72.1; fine-tuned Language Model perplexity: 24.9

@asotov @Gabriel_Syme any ideas?

Have you tried classifying with the fine-tuned model? Did results get better? Comparing the two might be interesting as well, to test the statement above (although it’s only one case, but still).

Very similar results. I get slightly lower confidences on sentences with out-of-vocab words, but out-of-context sentences are still predicted with high confidence.

One way to fix this could be to populate the dataset with arbitrary sentences and label them as such, but I still feel there must be some way to get better separation with what we have.
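
Something like this is what I have in mind (purely illustrative column names, labels, and example sentences, not my actual data):

```python
import pandas as pd

# Sketch of the "label arbitrary sentences" workaround: append
# out-of-domain examples under an explicit reject class so the
# classifier can learn to route them away from the real labels.
train_df = pd.DataFrame({
    "text": ["great product, works perfectly", "terrible, broke after a day"],
    "label": ["happy", "unhappy"],
})
out_of_domain = pd.DataFrame({
    "text": ["sun rises in the east", "the meeting is at noon"],
    "label": ["none_of_the_above", "none_of_the_above"],  # explicit reject class
})
train_df = pd.concat([train_df, out_of_domain], ignore_index=True)
```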

Isn’t this mostly how the softmax function works?
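
A toy illustration: a logit gap of only about 5 already pushes the winning class past 0.99, so even a mildly overconfident classifier head will look “certain” on almost anything.

```python
import torch
import torch.nn.functional as F

# Softmax saturates quickly: modest logit gaps give near-certain probabilities.
for logits in ([1.0, 0.0], [3.0, 0.0], [5.0, 0.0], [8.0, 0.0]):
    probs = F.softmax(torch.tensor(logits), dim=0)
    print(logits, "->", [round(p, 4) for p in probs.tolist()])
# [1.0, 0.0] -> [0.7311, 0.2689]
# [3.0, 0.0] -> [0.9526, 0.0474]
# [5.0, 0.0] -> [0.9933, 0.0067]
# [8.0, 0.0] -> [0.9997, 0.0003]
```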

I think something like this might be what you need (the density function, not the adversarial example stuff): https://arxiv.org/abs/1707.07013
