Advice on using nn in a c++ application

Hey everyone, I need a bit of general advice. I’m working on a sign language translation application that is written in C++ and uses the Qt framework. I need to train an image classifier and am not sure how to approach it. There are two options for me:

  1. Use fastai (a library that I am familiar with) and try to convert that model to a format that can be used in C++. If any of you have done this, I’d appreciate some pointers on how to achieve it. Is there a well-tested way to do this?
  2. Use OpenCV (a library that I am not familiar with) and try to do both the training and the inference in C++. This option is a bit overwhelming for me, considering I have a deadline.

If any of you have done something similar, I’d appreciate your advice here.


Model deployment with C++ is a fairly standard task, and the first option you have described is the more conventional and straightforward of the two. Assuming you have a trained PyTorch model, there are two routes you can take to perform inference in C++, outlined below.

I) TorchScript: TorchScript enables you to serialize PyTorch models and execute them in other environments such as C++. The official guide walks the reader through this process.
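To make the TorchScript route concrete, here is a minimal sketch of the export side. The model and the file name `model.pt` are placeholders standing in for your trained classifier; the shape of the dummy input is the common ImageNet-style shape and should match whatever your application will actually feed the network.

```python
import torch
import torch.nn as nn

# Stand-in for your trained classifier (illustrative only; use your real model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 10))
model.eval()

# Trace the model with a dummy input of the shape your C++ app will provide.
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(model, example)

# Serialize the module. On the C++ side it can be loaded with
# torch::jit::load("model.pt") from the libtorch API.
scripted.save("model.pt")
```

A quick sanity check before moving to C++ is to reload the saved module with `torch.jit.load` and confirm its outputs match the original model on the same input.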
II) ONNX + ONNX Runtime: ONNX is a framework for converting machine learning models from most packages - TensorFlow, PyTorch, etc. - into a common intermediate representation (IR) graph, i.e., the ONNX format. ONNX Runtime provides APIs for running ONNX models in various languages, including C++. PyTorch has a tutorial on exporting a model using ONNX, and here is a repository of C++ sample applications demonstrating ONNX Runtime.

The latter is leaner and tends to be more efficient, whereas the former is simpler and can be more flexible depending on the network architecture. In short, if you are seeking maximal performance and runtime optimizations, ONNX + ONNX Runtime would be the more judicious choice. Otherwise, TorchScript is an excellent alternative.

P.S.: fastai may be pre-processing your data during training, e.g., normalization. It’s important that such steps be reproduced at deployment as well.

Please don’t hesitate to reach out if you have further questions.


Thanks for the response @BobMcDear!

The only thing I am worried about is that fastai is a high-level API, so there might be steps needed for converting a learner that are not very well documented. I couldn’t find a fastai-specific example, but I guess I’ll just start with the links you sent.

Thanks again, man! You are very helpful.


You’re very welcome, I am glad you found my response helpful. Bear in mind that you are not converting the fastai Learner: you need to extract the underlying PyTorch model with learn.model, and you can subsequently follow the instructions I have linked. However, the data pre-processing conducted by fastai must not be ignored; in the case of image classification, that generally entails resizing, cropping, and normalization, all of which should be replicated at deployment as well.
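To illustrate what replicating that pre-processing involves, here is a sketch in plain NumPy of a center crop, resize, and normalize pipeline. The ImageNet statistics below are the common defaults fastai uses for pretrained models, but you should confirm them against your own DataLoaders; the nearest-neighbor resize is a deliberate simplification, and in the C++ application you would do the equivalent with Qt's QImage or OpenCV.

```python
import numpy as np

# Common ImageNet statistics (fastai's usual defaults for pretrained models);
# confirm against your own DataLoaders before deploying.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img, size=224):
    """img: HxWx3 uint8 RGB array. Returns a 1x3xSxS float32 array (NCHW)."""
    h, w, _ = img.shape
    # Center-crop to a square.
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    img = img[top:top + side, left:left + side]
    # Naive nearest-neighbor resize to size x size.
    ys = np.arange(size) * side // size
    xs = np.arange(size) * side // size
    img = img[ys][:, xs]
    # Scale to [0, 1], normalize per channel, move channels first, add batch dim.
    x = img.astype(np.float32) / 255.0
    x = (x - IMAGENET_MEAN) / IMAGENET_STD
    return x.transpose(2, 0, 1)[None]
```

Whichever export route you take, feeding the C++ model a tensor produced by the same sequence of steps (in the same order, with the same statistics) is what keeps deployed predictions consistent with training.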
