Productionizing models thread

A bit of a late comment, but this is how we use ULMFiT in production: https://github.com/inspirehep/inspire-classifier
It’s still based on fastai v0.7 (which we may migrate to v1 soon). We deploy the whole thing to OpenShift and expose a REST API for sending in text data and getting the classification scores back. It’s slow since the OpenShift instance is CPU-only, but we are trying to work around that.
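
To give an idea of what the serving side could look like after the move to v1, here is a minimal sketch assuming a learner exported with `learn.export()` and served via Flask. The endpoint name, file paths, and JSON shape are illustrative, not our actual code:

```python
# Minimal serving sketch: fastai v1 learner behind a Flask REST endpoint.
# Assumes the model was exported with learn.export(); paths and the
# /classify endpoint are illustrative, not the inspire-classifier API.
from fastai.text import load_learner
from flask import Flask, jsonify, request

app = Flask(__name__)
learner = load_learner(".", "export.pkl")  # assumed export location

@app.route("/classify", methods=["POST"])
def classify():
    # Expect JSON like {"text": "..."} and return per-class scores.
    text = request.get_json()["text"]
    pred_class, _, probs = learner.predict(text)
    return jsonify({
        "prediction": str(pred_class),
        "scores": [float(p) for p in probs],
    })

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```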

Of course I would appreciate any feedback and comments, especially on how we could do better.
