Can't replicate ULMFit validation predictions

Maybe? It makes the code a lot more complicated, and there is the existing TestSetDataLoader (or whatever it is called), which does exactly that.

I don’t think there are any examples of making a single prediction as you would need in an interactive application.

As noted above, I think predicting on a single text input would definitely be useful.
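
To make the interactive use case concrete, here is a minimal sketch of what single-text prediction could look like. Everything here is a stand-in: `simple_tokenize`, `stoi`, and `dummy_classifier` are hypothetical names, and the dummy classifier just returns fixed logits in place of the real trained ULMFiT model.

```python
import numpy as np

# Hypothetical vocabulary mapping (stand-in for the itos/stoi built during training).
stoi = {"_unk_": 0, "this": 1, "movie": 2, "was": 3, "great": 4}

def simple_tokenize(text):
    """Very rough stand-in for fastai's Tokenizer: lowercase and split."""
    return text.lower().split()

def numericalize(tokens):
    """Map tokens to ids, falling back to the unknown token."""
    return np.array([stoi.get(t, stoi["_unk_"]) for t in tokens])

def dummy_classifier(ids):
    """Placeholder for the trained classifier: returns fake class logits."""
    return np.array([0.1, 0.9])

def predict_one(text):
    """Tokenize -> numericalize -> model -> argmax, for one input string."""
    ids = numericalize(simple_tokenize(text))
    logits = dummy_classifier(ids)
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax
    return int(probs.argmax())

print(predict_one("This movie was great"))  # -> 1 (index of the larger logit)
```

The real version would swap in the saved vocabulary and the trained classifier, but the tokenize/numericalize/forward/argmax shape of the pipeline stays the same.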

@nickl, did you add those functions to the model class, or why do they take `self` as an argument? It might be better to just add them to the script for now. Also, you should be able to just load the final model, without loading the encoder and classifier separately.
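
The "load the final model only" point, sketched abstractly with a plain-Python toy (pickle here is just a stand-in; the real code would go through the learner's save/load, i.e. torch state dicts):

```python
import io
import pickle

# Toy "model" with an encoder part and a classifier head.
model = {"encoder": {"emb": [0.1, 0.2]}, "classifier": {"w": [0.3]}}

# Saving the whole thing once...
buf = io.BytesIO()
pickle.dump(model, buf)

# ...means the prediction script can restore it in one step,
# instead of loading the encoder and the classifier head separately.
buf.seek(0)
restored = pickle.load(buf)
print(restored["classifier"]["w"])  # -> [0.3]
```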

For the tokenization, you probably don’t need to partition by cores for a single text input.
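
A sketch of that idea, assuming the tokenizer exposes both a batch entry point and a plain single-text one (this `SimpleTokenizer` is a simplified stand-in, not fastai's actual spaCy-backed implementation):

```python
class SimpleTokenizer:
    """Toy stand-in for fastai's Tokenizer (the real one wraps spaCy)."""

    def proc_text(self, text):
        # Single-text path: no multiprocessing or core-partitioning overhead.
        return text.lower().split()

    def proc_all(self, texts):
        # Batch path: in fastai this is what gets partitioned by cores;
        # for one interactive input, calling proc_text directly is enough.
        return [self.proc_text(t) for t in texts]

tok = SimpleTokenizer()
print(tok.proc_text("Hello ULMFiT world"))  # -> ['hello', 'ulmfit', 'world']
```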

Looks good otherwise.

I’m traveling from today for a week, so will be less responsive. Feel free to submit a PR once it’s ready and I’ll take a look at it once I’m back or someone else can in the meantime.

@sebastianruder I pulled that code from another thing I’m working on, which is where the self arguments come from. I’ll clean that up.

I’ll test loading the final model only. I thought I tried it and it failed, but I don’t remember the specifics.

If you are coming to ACL then welcome to Australia! I’m sadly in both Adelaide and Sydney while it is on but not in Melbourne - otherwise I’d buy you drinks/coffee/something.

Pull request with text prediction script available at https://github.com/fastai/fastai/pull/641

Tagging @sebastianruder

I think these two results may differ because `rx[:, :1]` will contain the padding that comes before the text?
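
To illustrate the padding point (a numpy sketch with a hypothetical pad id of 1, not the library's actual batching code): with pre-padding, the first column of a batch holds pad tokens for every sequence shorter than the longest one, so slicing `rx[:, :1]` mostly returns padding rather than the first real token.

```python
import numpy as np

PAD = 1  # hypothetical padding token id
seqs = [[5, 6, 7, 8], [9, 10]]  # two numericalized texts of different lengths

max_len = max(len(s) for s in seqs)
# Pre-padding: pad tokens go *before* the text.
rx = np.array([[PAD] * (max_len - len(s)) + s for s in seqs])
print(rx)
# [[ 5  6  7  8]
#  [ 1  1  9 10]]
print(rx[:, :1].ravel())  # first column: [5 1] -- padding for the shorter text
```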