Non-deterministic ULMFiT LM inference scores

Hi!

I’m running the pretrained wt_103 model on a batch of sentences, and each time I run it the inference scores come out slightly different. I’ve tried to disable dropout etc. by calling model.eval() and model.reset(), but to no avail: the “spread” is smaller, but the scores are still somewhat random. Do you have any suggestions on how to achieve deterministic predictions?
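For reference, this is roughly what my scoring step looks like (names like `model` and `batch` are placeholders, and the seeding calls are just things I’ve experimented with, not necessarily the right fix):

```python
import torch

# Placeholders: `model` is the pretrained wt_103 language model,
# `batch` is an already-numericalised batch of sentences.
torch.manual_seed(0)           # fix the CPU RNG in case some stochastic op remains
torch.cuda.manual_seed_all(0)  # same for the GPU, if one is used

model.eval()    # put dropout layers into inference mode
model.reset()   # clear the RNN hidden state between batches

with torch.no_grad():          # no gradients needed for scoring
    scores = model(batch)
```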

Thanks!

A gentle reminder - even if you have no idea about, or no interest in, this issue, it would be good to have that feedback too :slight_smile: