Lesson 4 Impact of sentence order on text_classifier_learner

I implemented a classifier that classifies a short paragraph (a course/syllabus description) into one of four top-level subject areas: English, Math, Science, and Social Studies.

It’s basically a copy of the IMDB sentiment classifier from Lesson 4, but trained on my own data: about 20,000 learning outcomes, each labeled with one of the four subject areas.
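Roughly what I did, sketched with the fastai v1 API from the course notebook (the dataframes, column names, and hyperparameters below are placeholders, not my exact values):

```python
from fastai.text import *

# train_df / valid_df each have a 'text' column (the course description /
# learning outcome) and a 'label' column (English, Math, Science, Social Studies).

# 1. Fine-tune the language model on the course descriptions
data_lm = TextLMDataBunch.from_df('.', train_df=train_df, valid_df=valid_df,
                                  text_cols='text')
learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
learn_lm.fit_one_cycle(1, 1e-2)
learn_lm.unfreeze()
learn_lm.fit_one_cycle(3, 1e-3)
learn_lm.save_encoder('ft_enc')

# 2. Build the classifier on top of the fine-tuned encoder
data_clas = TextClasDataBunch.from_df('.', train_df=train_df, valid_df=valid_df,
                                      text_cols='text', label_cols='label',
                                      vocab=data_lm.train_ds.vocab, bs=32)
learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')

# Gradual unfreezing, same schedule shape as the IMDB example
learn_clas.fit_one_cycle(1, 2e-2)
learn_clas.freeze_to(-2)
learn_clas.fit_one_cycle(1, slice(1e-2/(2.6**4), 1e-2))
learn_clas.unfreeze()
learn_clas.fit_one_cycle(2, slice(1e-3/(2.6**4), 1e-3))
```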

I was initially extremely excited by the accuracy of the results, but as I test on more unseen data I’m finding that slight variations in sentence order can cause the classifier to completely flip its prediction. I’m wondering whether this indicates something I’ve done wrong somewhere, or whether it’s just an artifact of building a classifier on top of a language model.

I tried predicting on similarly ambiguous sentences directly in the Lesson 4 notebook with the IMDB data and I see some strange results there too, but it seems to do better than my subject classifier.

As an example of the kinds of issues I’m seeing, the following paragraph is predicted as English (0.9708):

The third of four basic mathematics courses introduces ratios, proportions, percentages, metric conversions, graphs, tables, and topic-related problem solving. Developing learning strategies is also an important component of this course.

However, if I switch the sentence ordering, the prediction flips to Math with high confidence.

Developing learning strategies is also an important component of this course. The third of four basic mathematics courses introduces ratios, proportions, percentages, metric conversions, graphs, tables, and topic-related problem solving.
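For reference, this is how I’m comparing the two orderings (a minimal snippet assuming the `learn_clas` learner from the sketch above):

```python
# Same two sentences, just swapped -- reproduces the order sensitivity.
desc_a = ("The third of four basic mathematics courses introduces ratios, "
          "proportions, percentages, metric conversions, graphs, tables, and "
          "topic-related problem solving. Developing learning strategies is "
          "also an important component of this course.")
desc_b = ("Developing learning strategies is also an important component of "
          "this course. The third of four basic mathematics courses introduces "
          "ratios, proportions, percentages, metric conversions, graphs, "
          "tables, and topic-related problem solving.")

for desc in (desc_a, desc_b):
    pred_class, pred_idx, probs = learn_clas.predict(desc)
    print(pred_class, probs)
# First ordering -> English (~0.97), second ordering -> Math,
# even though the content is identical.
```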

I do see similar inconsistency with some IMDB reviews that are written to be deliberately ambiguous, but there the first sentence seems to drive the class, whereas in my classifier/examples it was the last sentence.

“It was awesome! I hated the movie.”
(Category tensor(1), tensor(1), tensor([0.2067, 0.7933])) (Positive)

“I hated the movie. It was awesome”
(Category tensor(0), tensor(0), tensor([0.9858, 0.0142])) (Negative)

In contrast, the following ambiguous reviews keep their class:

“I hated the movie. But I loved the acting”
(Category tensor(1), tensor(1), tensor([0.1905, 0.8095])) (Positive)

“I loved the acting. But I hated the movie”
(Category tensor(1), tensor(1), tensor([0.1182, 0.8818])) (Positive)

As additional background, when I trained the language model part of my network, it had an accuracy of 0.603887, which I thought was amazingly high (the IMDB-trained language model only reached about 0.3).

Also, my classifier’s validation accuracy after the third training cycle (using the same approach as the IMDB example) was a fantastic 0.963250.

On a lot of the unseen data I’ve tried, the classifier works really well, but examples like the one above make it somewhat unusable: I have no control over what goes into a course description, and descriptions will often have more ‘Englishy’ phrases mixed in. Maybe I am expecting too much?
