Inconsistent predictions after export to ONNX (Resnet)

I’m exporting a basic ResNet-18 with a new head and 3 classes to ONNX, but I get inconsistent predictions compared to fastai/PyTorch. The inputs are 64x64 patches, normalized with ImageNet stats. When I print the weights they differ as well, although 80-90% of the predictions are correct…

I’m using fastai v2, torch==1.7.0, and torchvision==0.8.1.
For ONNX inference I have tried onnxruntime==1.5.1 and tract, a cross-platform inference engine.

This is the call I used to export the model:
torch.onnx.export(my_r18.eval(), dummy_input, "model_onnx.onnx")

If anybody has an idea I would be really thankful, since I’m running out of ideas.

Usually this happens because a part of the model was not converted, typically image normalization.

I would remove everything from the PyTorch model except the first conv layer, convert that to ONNX, and then compare the outputs from the two models on an identical input (such as a tensor containing all ones).

If this already goes wrong, chances are that the mistake is in the data that goes into this first conv layer.


good tip sir!

Thanks @machinethink, I had a mistake in the inference part afterwards. tract, which I was using, was doing something weird; after I switched back to onnxruntime everything worked out fine, I just had to apply a softmax to the output.
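For anyone hitting the same thing: the exported model returns raw logits, so probabilities have to be computed manually. A minimal numerically stable softmax (the logit values below are made up for illustration):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    # Subtracting the max prevents overflow in exp without changing the result.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exps = np.exp(shifted)
    return exps / exps.sum(axis=-1, keepdims=True)

# Example raw logits for a batch of one, 3-class head (made-up values).
logits = np.array([[2.0, 0.5, -1.0]])
probs = softmax(logits)
```

Each row of `probs` sums to 1, and `probs.argmax(axis=-1)` gives the predicted class.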