I’m exporting a basic ResNet18 with a new head and 3 classes to ONNX, but I get inconsistent predictions compared to fastai/PyTorch. The inputs are 64x64 patches normalized with ImageNet stats. When I compare the weights between the two models they differ as well, although 80–90% of the predictions still match.
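For reference, my preprocessing and the PyTorch-side prediction look roughly like this (simplified; the image path and the `my_r18` variable are placeholders for my actual pipeline):

```python
import numpy as np
import torch
from PIL import Image

# ImageNet normalization stats, same as the fastai defaults
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(path):
    img = Image.open(path).convert("RGB").resize((64, 64))
    x = np.asarray(img, dtype=np.float32) / 255.0   # HWC, scaled to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # ImageNet normalization
    return x.transpose(2, 0, 1)[None]               # NCHW, batch of 1

x = preprocess("patch.png")                         # placeholder path
with torch.no_grad():
    logits = my_r18.eval()(torch.from_numpy(x))     # my_r18 = the fastai model
print(logits, logits.argmax(dim=1))
```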
I’m using fastai v2, torch==1.7.0, and torchvision==0.8.1.
For inference on the ONNX side I have tried onnxruntime==1.5.1 and tract, a cross-platform inference engine.
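This is roughly how I run the exported model with onnxruntime, feeding it the same normalized array `x` (and comparing against the `logits` tensor) from the snippet above:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("model_onnx.onnx")
input_name = sess.get_inputs()[0].name
onnx_logits = sess.run(None, {input_name: x.astype(np.float32)})[0]

print(onnx_logits, onnx_logits.argmax(axis=1))
# How far apart are the raw outputs from the PyTorch ones?
print(np.abs(onnx_logits - logits.numpy()).max())
```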
This is the call I used to export the model:
```python
torch.onnx.export(my_r18.eval(), dummy_input, "model_onnx.onnx")
```
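If it helps, a more explicit variant of the export call would look something like this; the opset version and tensor names below are just my guesses at sensible defaults, not anything fastai prescribes:

```python
import torch

my_r18.eval()                                   # make sure BatchNorm/Dropout are in eval mode
dummy_input = torch.randn(1, 3, 64, 64)         # matches my 64x64 RGB patches
torch.onnx.export(
    my_r18,
    dummy_input,
    "model_onnx.onnx",
    opset_version=11,                           # assumption: one of the opsets torch 1.7 supports
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},       # allow variable batch size at inference time
)
```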
If anybody has an idea I would be really thankful, since I’m running out of ideas.