Need clarification on fastai model deployment using PyTorch

I am trying to deploy a fastai-trained resnet34 model into production by exporting it as a PyTorch model and later loading it as a PyTorch model in the production environment. It was suggested that I use torch.save(learn.model.state_dict(), params_path) instead of torch.save(learn.model, path) to export the model.
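
For context, the two export options look roughly like this (a sketch, assuming learn is the trained fastai Learner and both paths are placeholders):

import torch

# Option 1 (suggested): save only the weights (state_dict)
torch.save(learn.model.state_dict(), params_path)

# Option 2: pickle the entire model object
torch.save(learn.model, path)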

I then tried to load the weights using the following snippet:

import torch
from torchvision import models
model = models.resnet34()
model.load_state_dict(torch.load(params_path))

However, it failed to load due to a mismatch in the keys, throwing the following error:

RuntimeError: Error(s) in loading state_dict for ResNet:    
Missing key(s) in state_dict:
Unexpected key(s) in state_dict:

Also, the model I trained is not exactly a resnet34, because fastai replaces the head with a custom one that suits the problem.
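
The mismatch is visible if you compare the key names directly (a quick sketch, again assuming learn is the trained Learner):

from torchvision import models

# fastai's cnn_learner wraps the resnet34 body in an nn.Sequential and appends a custom head,
# so its state_dict keys look roughly like "0.0.weight", "1.2.weight", ...
print(list(learn.model.state_dict().keys())[:5])

# ...while torchvision's resnet34 expects keys like "conv1.weight", "bn1.weight", ..., "fc.weight"
print(list(models.resnet34().state_dict().keys())[:5])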

For these reasons, I decided to export both the model architecture and the parameters from the fastai Learner object.

Exporting the model:

model = learn.model
torch.save(model, 'fastai-model-arch.pth')                  # model architecture
torch.save(model.state_dict(), 'Stage-Resnet34-torch.pth')  # model weights

Loading in the production environment:

model = torch.load('fastai-model-arch.pth')                       # load model architecture
model.load_state_dict(torch.load('Stage-Resnet34-torch.pth'))     # load model parameters

I got an <All keys matched successfully> response back.
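
As an extra sanity check, one can run a dummy forward pass (a sketch; the 224x224 input size here is an assumption and should match whatever size you trained on):

model.eval()                                   # disable dropout / batchnorm updates
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))   # dummy batch
print(out.shape)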

My question: is it fine to export the model the way I did, or does this method have any drawbacks? I am asking because I don't see anyone suggesting this approach in the forums.

Looking forward to a positive response.

Hello @Skumarr53, this is not an answer to your question, just my own curiosity: is there a reason you do not simply use fastai for the deployment? Whenever I use fastai for training, I also use it for deployment. I just create the learner, load up the exported model, and proceed to do inference.
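
For reference, that flow is roughly the following (a minimal sketch; the imports are fastai v1-style and may differ in your version, and the paths are placeholders):

from fastai.vision import load_learner, open_image

# learn.export() at training time writes export.pkl into the Learner's path
learn = load_learner('/path/to/export_dir')        # rebuilds the model plus its preprocessing
img = open_image('some_image.jpg')
pred_class, pred_idx, probs = learn.predict(img)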

Hello @sariabod, I know that with fastai's Learner, deployment is much easier than with plain PyTorch. But I might have to share my work with other people who may or may not have the fastai library installed on their machines, so I am putting in extra effort to make it more user-friendly.

I have a question for you sir. Have you faced issues of any sort after deploying the model into production using a learner object?


The issue with not using the fastai Learner is that you don't get access to the preprocessing very quickly. Regardless, a Learner object is a wrapper of sorts: when exported, it keeps your model and any data statistics needed for your preprocessors, so you can generate test data on the fly. To deploy in pure PyTorch alone (though I'm unsure why you would, when fastai's overhead is so minute), you'd need to save the model specifically (torch.save(learn.model) should do the trick, I think) and then you'd need your preprocessing functions in OpenCV or something similar, for both resizing and normalizing your data to the training data's statistics.
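
As a rough illustration of that last point, here is a minimal pure-PyTorch inference sketch using your saved files from above, with PIL and torchvision transforms instead of OpenCV. The 224x224 size and ImageNet normalization statistics are assumptions, so match them to whatever your training pipeline actually used, and 'some_image.jpg' is a placeholder:

import torch
from PIL import Image
from torchvision import transforms

# Preprocessing that mirrors a typical ImageNet-style setup (adjust to your own training transforms)
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Load the pickled architecture plus the weights (note: if the saved architecture contains
# fastai-specific layers, unpickling it may still require fastai to be importable)
model = torch.load('fastai-model-arch.pth', map_location='cpu')
model.load_state_dict(torch.load('Stage-Resnet34-torch.pth', map_location='cpu'))
model.eval()

img = Image.open('some_image.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)            # shape [1, 3, 224, 224]
with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)
print(probs.argmax(dim=1).item())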


Otherwise, I have examples for all types of models, using fastai, here: https://github.com/muellerzr/Practical-Deep-Learning-For-Coders/blob/master/DeployingModels.ipynb


Thanks a lot, @muellerzr, this notebook comes in handy for my task.

@Skumarr53 I have not had any major issues. I have a Docker template I use, and I just create a minimal model and load up the weights. The only recent issue I had was when not using Docker but instead testing out Google Cloud Functions to deploy the model, where I ran into space issues: the exported model/weights were too large. I believe this is a PyTorch problem and not necessarily a fastai problem.

@sariabod Thanks for the reply. If possible, can you share the Docker template with me, or any article/post you came across that explains deploying a fastai model via a Docker image?

@Skumarr53 This is the one from my GitHub page that I use; you can see how I used it to deploy a segmentation project. I didn't really document it, so just let me know if you have specific questions about any part.

Thanks, @sariabod, for sharing these files. Can you give a rough estimate of how much space an instance created from the Docker image would take?

Hi, do you have something for GAN models?