Is there a way that we can save the model architecture and model weights into one file? And at inference time, can we simply load that file and do the prediction?
Hi Frank,
Are you looking for learner.export() and load_learner(FILE_PATH)?
Under the hood, learner.export gathers a bunch of information into a dict and uses PyTorch's torch.save to save that dict as a pickle. And when we call load_learner, it calls torch.load and pops each attribute one by one to "restore" the state.
You can check out this file to see what the export function stores in the pickle file, https://github.com/fastai/fastai/blob/5b11f1f864c200623a40565f2efb8c3ba6d334ef/fastai/basic_train.py#L232.
args = ['opt_func', 'loss_func', 'metrics', 'true_wd', 'bn_wd', 'wd', 'train_bn', 'model_dir', 'callback_fns']
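As a rough illustration of that mechanism (a minimal sketch with plain pickle, not fastai's actual implementation; the class and attribute names here are made up), saving a dict of attributes and then popping them back to restore state looks like this:

```python
import pickle

class TinyLearner:
    """Stand-in for a fastai Learner; attribute names are hypothetical."""
    def __init__(self, wd=0.01, model_dir='models'):
        self.wd = wd
        self.model_dir = model_dir

    def export(self, path):
        # Gather selected attributes into a dict and pickle it,
        # much like learner.export does via torch.save.
        state = {'wd': self.wd, 'model_dir': self.model_dir}
        with open(path, 'wb') as f:
            pickle.dump(state, f)

def load_tiny_learner(path):
    # Load the dict and pop each attribute back onto a fresh
    # object to "restore" the state, like load_learner does.
    with open(path, 'rb') as f:
        state = pickle.load(f)
    learner = TinyLearner()
    learner.wd = state.pop('wd')
    learner.model_dir = state.pop('model_dir')
    return learner
```

The real export stores more than plain attributes (the model, data transforms, callbacks), but the save-a-dict / pop-it-back pattern is the same idea.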
Let me know if this is what you're looking for. I am pretty new to fast.ai as well, so feel free to correct me.
Cheers
Bo
Thanks! That is really useful.
But I am looking for something that can save the model and weights into one single file.
It is the learn.export() he described above.
https://course.fast.ai/deployment_render.html
The above link shows a good example of how to deploy, and all you need is learn.export
. It saves everything into a pickle file that you can load and unpack.
E.g.: learn.export('myModel.pkl')
If you look at line 32 onwards in server.py, you will see we call load_learner, giving it a path and the file we want to load (that pkl file).
Then if you look at lines 59-64, we get an image and can run learn.predict().
This works the same way on any deployment platform, even if you just want to do it in a Python notebook. Consider load_learner an equivalent to cnn_learner, except that we get all of our information from that pkl file. By information I mean we load the architecture along with the weights we saved via export().
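To illustrate why one pickle file is enough to hold both "architecture" and "weights" (again just a plain-Python sketch, not fastai's implementation; ToyModel and its attributes are made up), pickling a whole model object captures its structure and its learned parameters together, so loading the file back gives you something you can call predict() on right away:

```python
import pickle

class ToyModel:
    """Hypothetical model: the 'architecture' is the class,
    the 'weights' are the data it carries."""
    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    def predict(self, x):
        # A trivial linear prediction using the stored weights.
        return self.weight * x + self.bias

# Save architecture + weights into one single file.
model = ToyModel(weight=2.0, bias=1.0)
with open('myModel.pkl', 'wb') as f:
    pickle.dump(model, f)

# At inference time, load that one file and predict.
with open('myModel.pkl', 'rb') as f:
    loaded = pickle.load(f)
print(loaded.predict(3.0))  # 7.0
```

One caveat: pickle stores a reference to the class, not its source code, so the class definition (here ToyModel, in fastai's case the Learner and its callbacks) must be importable when you load the file.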
Hope that helps!