Could we pin this topic to the top? It's a very interesting one for anyone who wants to bring ML from notebooks into the real world!
Also, I wonder: is it possible to somehow "extract" the preprocessing pipeline from the learner? I guess this question has already been discussed a lot in the thread about ConvLearner. Still, it would be really handy to have something like:
```python
# on a training machine
learn = ...  # trained model
learn.export(model_path)

# on server
inference = fastai.load(model_path)
test_images = load_data(path)
predictions = inference(test_images)
```
Of course, this is just an example, but it would be great to have something similar. The main idea is to be able to load the whole pipeline without needing to re-create data loaders, transformations, etc. In other words, a single "executable"-like artifact that keeps everything required to run the model, like computational graphs in TF.
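To make the idea concrete (this is not fastai's actual API, just a sketch of the concept): serialize the trained weights *together with* the preprocessing parameters into one artifact, so the server side never has to re-create the training pipeline. A minimal numpy/pickle illustration, with all names hypothetical and a toy linear "model" standing in for a real network:

```python
import pickle
import numpy as np

# --- training machine ---
# a toy "model": one linear layer stored as plain arrays
weights = np.random.randn(4, 3).astype(np.float32)
bias = np.zeros(3, dtype=np.float32)
# preprocessing stats computed from the training data
mean, std = np.float32(2.0), np.float32(0.5)

# the bundle is the "single executable": model + preprocessing in one file
bundle = {"weights": weights, "bias": bias, "mean": mean, "std": std}
with open("model_bundle.pkl", "wb") as f:
    pickle.dump(bundle, f)

# --- server ---
with open("model_bundle.pkl", "rb") as f:
    b = pickle.load(f)

def inference(x):
    x = (x - b["mean"]) / b["std"]      # preprocessing travels with the model
    return x @ b["weights"] + b["bias"]

test_batch = np.random.randn(8, 4).astype(np.float32)
predictions = inference(test_batch)
print(predictions.shape)   # (8, 3)
```

The point is only that the normalization stats are looked up from the artifact rather than re-specified on the server, which is exactly the mismatch that causes wrong predictions when the two sides drift apart.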
Also, has anybody tried to use `learn.model` directly, i.e., the PyTorch model itself? I remember having problems when I tried to load an image manually (using PIL) and feed it into the model. I normalized the data, but the predictions were wrong. I guess I missed some other preprocessing steps.