BERT vs ULMFiT faceoff

Hi, when pytorch-transformers was released, I wanted to try it out to see how well it performs and how convenient it is to use. So I got started, one thing led to another, and I ended up building this BERT vs ULMFiT sentence classifier (trained on the IMDB dataset) web app.
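
For context, the BERT side of the classifier is essentially just pytorch-transformers inference; a minimal sketch (the `./bert-imdb` checkpoint path is a placeholder for the fine-tuned model, and label 1 is assumed to be positive):

```python
import torch
from pytorch_transformers import BertTokenizer, BertForSequenceClassification

# Placeholder path: wherever your IMDB-fine-tuned BERT checkpoint lives.
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertForSequenceClassification.from_pretrained('./bert-imdb')
model.eval()

text = "One of the best films I have seen in years."
tokens = ['[CLS]'] + tokenizer.tokenize(text) + ['[SEP]']
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

with torch.no_grad():
    logits = model(input_ids)[0]  # model returns a tuple; logits come first
print('positive' if logits.argmax(dim=1).item() == 1 else 'negative')
```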

It is deployed on Google App Engine. I have also added an option to provide feedback on reviews where a model does really well (or not), which we can then analyze further.
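
The feedback handler itself is tiny; conceptually it is just a POST route that records the review, the model, and the user's verdict (the route and field names below are illustrative, not the exact app code):

```python
import csv
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/feedback', methods=['POST'])
def feedback():
    data = request.get_json()
    # Sketch only: the deployed app would use durable storage instead,
    # since App Engine's local filesystem is not persistent.
    with open('/tmp/feedback.csv', 'a', newline='') as f:
        csv.writer(f).writerow([data['review'], data['model'], data['agrees']])
    return jsonify(status='ok')
```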

Here’s the link: https://deployment-247905.appspot.com/.
Give it a spin and let me know if you have any comments or feedback. Thank you.

This is really great. Can you share some links to the resources you used to learn how to build this app?

Nice web app! Did you really train the ULMFiT model well??

Short answer: those 100+ tabs open in the browser. 🙂

The code is on GitHub (it needs updating to the latest version; I'll do that today), and I'll write a blog post as well.
Will that suffice?

I ran the same input through the official IMDB notebook and it gave me the same prediction. Are you seeing anything different?
My guess is that ULMFiT doesn't like short sentences. If you add a bit more text, the sentiment may turn positive.
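
You can sanity-check this with the exported learner from the notebook; something like this (the export path is a placeholder):

```python
from fastai.text import load_learner

# Placeholder: the directory where you called learn.export() in the
# official IMDB notebook (load_learner reads export.pkl from that folder).
learn = load_learner('path/to/imdb_export')

for text in ["Great!", "Great! I enjoyed every single minute of this movie."]:
    pred_class, _, probs = learn.predict(text)
    print(f"{text!r} -> {pred_class} (probs: {probs})")
```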

Yep, ULMFiT works well on long sentences. Waiting for the blog post!

Here it is: https://towardsdatascience.com/battle-of-the-heavyweights-bert-vs-ulmfit-faceoff-91a582a7c42b.

I ran into issues with ULMFiT on short sentences too. It classified “I farted because I laughed so hard” as very negative sentiment, whereas BERT predicted positive.

Found a way to fool BERT.
