Loading your models for inference and classification

#1

After building my model on my GPU machine with a fastai 1.0.46 installation, I am now trying to do inference on another machine that has only a CPU and a fastai 1.0.57.dev0 installation.

In my effort to load the data, after my fastai imports and my definition of the path, path = Path('path/to/my/data.pkl file'), I am trying to load the data in the following way:

data = load_data(path=path, fname="data.pkl")

When I try to do that I get the error:

FileNotFoundError: [Errno 2] No such file or directory: 'path/to/my/data_save.pkl'

Referring back to my GPU machine, I am able to locate this data_save.pkl file. If I copy that file over to my CPU machine and again issue data = load_data(path=path, fname="data.pkl")

I now receive the following error:

TypeError: intercept_args() got an unexpected keyword argument 'fname'

and the error seems to be in

~/fastai/fastai/text/data.py

Why is data_save.pkl needed to load the data? And why, when I copy data_save.pkl over to my CPU machine, do I get the TypeError? After all, do I really need to load the data to do inference, or do I just need the model?
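For what it's worth, the load_data signature in the 1.0.5x docs seems to take a file keyword (defaulting to 'data_save.pkl') rather than fname, which might explain both errors: with fname unrecognized, the default 'data_save.pkl' is what actually gets looked up on disk, and once that file exists the stray fname falls through to the databunch constructor. A sketch of what I believe the call should look like, with a placeholder path:

from fastai.text import *

path = Path('path/to/my')  # placeholder: the folder containing the .pkl file
# in fastai 1.0.5x the keyword appears to be file (default 'data_save.pkl'), not fname
data = load_data(path, file='data.pkl')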

Is there a complete example of what I need to export, once the training phase(s) are complete, in order to do inference and/or classification?

I have seen in the videos that there are different things you can do (export the encoder when you want to do classification, save a learner after fitting, save a databunch after creating it to save time when you resume work, etc.), but is there a minimal example somewhere that describes how I can use the models I build? There are also different files created when you save a databunch, a learner, and a model: I can see *.pkl files, but also *.pth files inside the models/ directory. How are these connected, and what is needed each time?
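To make the question concrete, here is my current (possibly wrong) understanding of what each call produces:

data.save('data_save.pkl')    # databunch -> data_save.pkl, reloaded with load_data
learn.save('stage-1')         # weights   -> models/stage-1.pth, reloaded with learn.load
learn.save_encoder('ft_enc')  # encoder   -> models/ft_enc.pth, for the classifier
learn.export()                # learner   -> export.pkl, reloaded with load_learner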

Thank you, and sorry for the many questions in one post.


#2

Hi, @pebox.

Check out BentoML. We've made it really easy to serve and operate ML models.
With BentoML, you won't need to export your model by hand and try to recreate it in your inference code. You just need to create a BentoML spec and save it along with your model. After that, you can use it in a lot of different serving scenarios without worrying about dependencies!

Here are two examples for fastai, pet classification and tabular CSV: https://github.com/bentoml/gallery/tree/master/fast-ai

You can try the pet classification example in Colab here: https://colab.research.google.com/github/bentoml/gallery/blob/master/fast-ai/pet-classification/notebook.ipynb


#3

Thank you @yubozhao,

I was hoping to stay local to fastai in order to reuse the models. Your BentoML spec looks promising if you keep developing it. So what is the purpose of the data_save.pkl file and of the .pth files created when you save a databunch and when you save a model after a fit? And why do I get these errors?

Thanks


#4

Hi @pebox, Thank you for checking out BentoML.

I am not sure I understand the context/meaning of "stay local to fastai"; can you tell me more about it?
Do you mean that you want to use fastai's own methods to load the model?

For BentoML, we don't do anything magical. We use fastai's export method, which creates a pickle file with all of the current state of the learner. This is the file you need for inference.

In PyTorch, saving/loading the entire model, rather than just its state_dict, is typically done with a .pth file. Saving the whole model takes more space than saving only the state_dict, and it is not the recommended way to prepare a model for inference.
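A minimal sketch of the two PyTorch idioms (the model here is just a stand-in):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for any trained model

# option 1: save the entire model object (pickles class + weights together)
torch.save(model, 'model_full.pth')
model_full = torch.load('model_full.pth')

# option 2 (recommended): save only the state_dict, then restore it
torch.save(model.state_dict(), 'model_weights.pth')
model2 = nn.Linear(10, 2)  # you must re-create the architecture first
model2.load_state_dict(torch.load('model_weights.pth'))
model2.eval()  # switch to inference mode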

Now, with that understanding of PyTorch's save/load, we can look at fastai. When you want to do inference, you can call learner.export; it will create a .pkl file containing the whole state of the learner (the model together with how the data was processed).
When you call learner.save, it will create a .pth file with the model's state_dict, so you can continue training and other tasks at a later time or on a different machine.
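A minimal sketch of the two fastai calls, assuming learn is an existing Learner and the filenames are just examples:

from fastai.basic_train import load_learner

learn.save('stage-1')    # checkpoint weights: writes models/stage-1.pth
learn.load('stage-1')    # restore them later to continue training

learn.export()                        # package for inference: writes export.pkl
learn_inf = load_learner(learn.path)  # reload on the CPU-only machine
pred = learn_inf.predict(my_item)     # my_item: a single input, e.g. a text string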

Does that answer some of your questions?

Best

Bo


#5

Hello @yubozhao, yes, some are answered. However, why and where is data_save.pkl needed, and why do I get the errors (see original post)?
Thank you for your answer.


#6

So @yubozhao,

you said that learner.save creates a .pth file with the model's state_dict, while learner.export creates a .pkl file containing the entire learner. What is the difference between the .pth and .pkl files? Is it only the encoding/compression, or is it that the .pth contains things needed for training but not needed for inference?


(marco) #7

I think you must rename the .pkl file to "export.pkl". Otherwise it won't work. At least this was the case for me.
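If I remember correctly, that is because load_learner looks for export.pkl by default. A sketch, assuming a recent 1.0.x release where you can also pass the filename explicitly:

from fastai.basic_train import load_learner

learn = load_learner(path)                  # expects path/export.pkl
learn = load_learner(path, 'my_model.pkl')  # or point it at another file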
