Use model trained with Fastai in OpenCV

Hi,

I have a simple two-class classification problem. All the input images have a size of 64x84 pixels.
I have trained a model with fastai:
from fastai.vision.all import *
from fastinference import *
path = 'train'
fnames = get_image_files(path)
def label_func(x): return x.parent.name

dls = ImageDataLoaders.from_path_func(path, fnames, label_func, valid_pct=0.2)

learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(1)

The accuracy is around 96%.
Then I export the model to ONNX format with:
torch.onnx.export(learn.model, torch.randn(1, 3, 64, 84), "model.onnx")
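
A sanity check I could run on the export itself (a minimal sketch, assuming onnxruntime is installed; the .cpu().eval() call is my addition to make sure BatchNorm/Dropout are in inference mode, and note the exported model returns raw logits since no softmax is appended):

import numpy as np
import onnxruntime as ort  # assumption: onnxruntime is available
import torch

model = learn.model.cpu().eval()  # inference mode on CPU before exporting
dummy = torch.randn(1, 3, 64, 84)
torch.onnx.export(model, dummy, "model.onnx")

# Run the same random input through the PyTorch model and the ONNX model;
# both outputs are raw logits and should agree to roughly 1e-5
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: dummy.numpy()})[0]
with torch.no_grad():
    torch_out = model(dummy).numpy()
print(np.abs(onnx_out - torch_out).max())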

Then I want to run inference with OpenCV in C++ on the same images used during training:
cv::dnn::Net model = cv::dnn::readNetFromONNX("model.onnx");
cv::Mat image = cv::imread(file);
cv::Mat blob = cv::dnn::blobFromImage(image, 1.0, cv::Size(), cv::Scalar(), true);
model.setInput(blob);
cv::Mat results = model.forward();

The results look wrong (large negative and positive values), and I cannot reproduce the accuracy obtained with fastai.

I don't know whether I'm doing something wrong when exporting to ONNX or during inference with OpenCV.


You need to recreate the transforms your validation set used. That means resizing/center-cropping the same way, or at the very least normalizing the image with the statistics applied to your training data (most likely the ImageNet stats).
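
As a rough sketch of what that preprocessing looks like, assuming the default pipeline (pretrained resnet34, so ImageNet normalization) and that your 64x84 images are fed at their native size, this is what the C++ side would have to reproduce; I'm writing it in Python/NumPy so you can compare values against fastai ("some_image.png" is a placeholder path):

import cv2
import numpy as np

# ImageNet statistics (RGB order) used by fastai's Normalize for pretrained models
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)

img = cv2.imread("some_image.png")           # BGR, uint8, HxWx3
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # fastai/PyTorch tensors are RGB
img = img.astype(np.float32) / 255.0         # IntToFloatTensor: scale to [0, 1]
img = (img - mean) / std                     # Normalize with ImageNet stats
blob = img.transpose(2, 0, 1)[None]          # HWC -> NCHW, shape (1, 3, H, W)
# `blob` is what should go into the network instead of the raw 0-255 image

As far as I know, blobFromImage can get you part of the way there (scalefactor 1/255, the mean given in the 0-255 range, swapRB=true), but the per-channel division by std has to be done separately, since blobFromImage does not support it.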


Thank you for your answer! Is there a way to know exactly which preprocessing is done by:
dls = ImageDataLoaders.from_path_func(path, fnames, label_func, valid_pct=0.2)
learn = cnn_learner(dls, resnet34, metrics=error_rate)
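
Would inspecting the transform pipelines on the DataLoaders be the right way to check this? Something like (a minimal sketch, assuming the fastai v2 API):

# Item-level transforms (e.g. any resizing/cropping applied per image)
print(dls.valid.after_item)
# Batch-level transforms (e.g. IntToFloatTensor, Normalize)
print(dls.valid.after_batch)
# cnn_learner with a pretrained model normally adds Normalize(imagenet_stats)
# to the batch transforms, so checking through the learner should show it:
print(learn.dls.valid.after_batch)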