Imdb-lesson4 ... What is good enough, encoder-wise?

Having halved the # of epochs, I’m getting a validation loss of ~ 4.21

From the notebook:

Language modeling accuracy is generally measured using the metric perplexity, which is simply exp() of the loss function we use:

math.exp(4.21) ≈ 67.36
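For reference, a rough sketch of that relationship, assuming the reported loss is the average per-token cross-entropy (the tensors below are made up, not taken from the notebook):

import math
import torch
import torch.nn.functional as F

# Hypothetical tensors: logits for 5 tokens over a 10,000-word vocab,
# plus the true next-token ids.
logits = torch.randn(5, 10_000)
targets = torch.randint(0, 10_000, (5,))

# F.cross_entropy averages over tokens by default, which is the "loss"
# reported during training; perplexity is just its exp().
val_loss = F.cross_entropy(logits, targets)
perplexity = math.exp(val_loss.item())
print(val_loss.item(), perplexity)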

How do I know if that is good or not?

For language modeling, is there a methodology for determining "good enough", or is it more art than science?

The best way is to just try it in your model and see if the results are good. You can also look at language-model research papers to see what kind of numbers they report for "perplexity". Earlier this year anything <80 was state of the art, IIRC! Although some datasets are easier than others, of course.


Any recommendations on the paper front?

When I played around with generating a few sentences with the language model, I noticed a good deal of repetition (especially with shorter primer sentences). Would this incline you to believe the model still has a ways to go?

No, it wouldn't make me think that. Your perplexity looks great to me. As mentioned in class, we haven't actually tried to create a good generator - our goal was to create a good classifier! To make a better generator you'll probably want to use beam search and other tricks. One simple step is: Configuring stateful LSTM cell in the language model
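A toy sketch of the beam search idea, assuming a hypothetical next_token_log_probs() function that returns one log-probability per vocabulary item (this is not the course code, just an illustration):

import heapq
from typing import Callable, List, Sequence, Tuple

def beam_search(
    prime: Sequence[int],
    next_token_log_probs: Callable[[Sequence[int]], List[float]],
    beam_width: int = 5,
    max_new_tokens: int = 20,
) -> List[int]:
    # Keep the beam_width highest-scoring sequences at each step instead of
    # greedily taking the single most likely next token each time.
    beams: List[Tuple[float, List[int]]] = [(0.0, list(prime))]
    for _ in range(max_new_tokens):
        candidates: List[Tuple[float, List[int]]] = []
        for score, tokens in beams:
            log_probs = next_token_log_probs(tokens)
            # Expand each beam with its beam_width best next tokens.
            for tok, lp in heapq.nlargest(beam_width, enumerate(log_probs), key=lambda x: x[1]):
                candidates.append((score + lp, tokens + [tok]))
        # Keep only the best beam_width candidates overall.
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beams[0][1]  # tokens of the highest-scoring beam

Greedy decoding is just the beam_width=1 case of the same loop.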


Ok thanks.

One last question for the evening … are there any special best practices for building a validation set for language models?

Just wondering if holding back 20% is a pretty standard practice here or if this particular problem space demands other strategies.

I can’t think of any subtleties here.
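For what it's worth, a minimal sketch of the plain 80/20 hold-out, split at the document level so no single review straddles train and validation (the docs list here is hypothetical):

import random

# Hypothetical: docs is a list of raw text documents (e.g. IMDB reviews).
docs = ["first review ...", "second review ...", "third review ...",
        "fourth review ...", "fifth review ..."]

random.seed(42)
random.shuffle(docs)

# Plain 80/20 split by whole document.
split = int(0.8 * len(docs))
train_docs, valid_docs = docs[:split], docs[split:]
print(len(train_docs), len(valid_docs))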