PyTorch models in production using C++

Hello All,

I’ve been playing with fastai and PyTorch for the last few weeks and have a few image classification models that I want to use in our software. The problem is that I can’t find any resources on how to do it in plain C++.

The constraint is: I can’t use any Python in production, only C++, and I have no internet connection, so Google/AWS or any other cloud service is not an option.

So far I’ve tried a few approaches:
Converting “model.pt” models to Caffe2 models: no luck.
Using libTorch (the PyTorch C++ API): no luck, because of the lack of documentation and tutorials.
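For what it’s worth, the libTorch route starts on the Python side: you trace (or script) the trained model into a TorchScript file, which is what `torch::jit::load` expects in C++. A minimal sketch, with a placeholder network and file name standing in for the real model:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for a trained fastai/PyTorch network.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),           # 3x32x32 -> 8x30x30
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(8 * 30 * 30, 2),    # two-class classifier
)
model.eval()

# Trace with a dummy input of the shape the model expects.
example = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, example)

# This .pt file is what the C++ side loads with torch::jit::load("model_traced.pt").
traced.save("model_traced.pt")
```

The saved file embeds both the graph and the weights, so the C++ process needs no Python at runtime, only the libtorch shared libraries.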

Does anyone have any ideas?

Thanks in Advance,
Alexey

Why didn’t Caffe2 work? You can easily convert “model.pth” into ONNX format. For more details, look at this tutorial

Thanks for your reply, msrdinesh.

In C++ I’m using the dnn module from OpenCV; it can read Caffe models via *.prototxt and *.caffemodel files.
So I need to convert the .pth to a Caffe model, i.e. save the PyTorch model in Caffe format.

When I convert “model.pth” into ONNX format, I can load it using the ONNX backend, but I can’t save it in Caffe format.

Any thoughts?

Hey, check this link. I think it will be helpful to you.


Hi @alexeykh,

I have an imperfect solution that might solve your problem or give you some ideas.

We’ve been working on an open-source framework (github.com/bentoml/bentoml) for packaging, shipping, and running ML services.

BentoML helps you package your model into an archive in your local file system or cloud storage, with all of its dependencies, preprocessing code, and configuration.

One way to use BentoML is via the CLI tool. Your production code can call:

$ bentoml predict /PATH/TO/ARCHIVE --input=INPUT_DATA

You can also install the archive, and then use the built-in CLI tool:

# Assume your BentoML archive's name is IrisClassifier
$ IrisClassifier predict --input=INPUT_DATA

These CLI tools still use Python under the hood, so this might be a deal-breaker for you.
Our fastai support is still in a PR; it should be merged and released this week.

Let me know if this helps you in any way. I’d also love to learn more about your use case and its context, so we can keep it in mind as we design and explore new features in the future.

Cheers

Bo

Hi @msrdinesh!
I would also like to deploy a fastai model in Qt with C++, but I don’t know where to start. From what I’m reading, I have to convert my .pth model to Caffe2 format… but after that I don’t know what else to do. Do you know of any guide for doing it? And what about my .pkl model, can’t I use that? Thank you so much!

Hey, I don’t have experience doing deep learning with C++. In the link I posted earlier, there are a few examples to work from. Basically you just need to convert the trained .pth file to ONNX format, then convert the ONNX format to Caffe2 format, and work from there. That’s pretty much it.

Hi Yessica,

Well, it is a bit of a problem :slight_smile:
The problem is that almost none of the AI frameworks really has a proper C++ API.
The way I’m doing it for fastai is with OpenVINO’s OpenCV:
1. Train your model using fastai.
2. Export your model to .onnx format.
3. Convert the ONNX model to OpenVINO format using the OpenVINO Model Optimizer.
4. Use the converted model in C++ via OpenVINO’s OpenCV.

Of course, this only works on x86 CPUs.

Good luck!
