I am experiencing a mismatch in the classification / prediction output of a trained fastai model after converting it to CoreML (via ONNX as an interim step), on the same test data.
Following a safari in South Africa during which we were lucky enough to have two leopard sightings, we struggled to identify the gender of the animals. Following the pets example, I trained a resnet34-based learner on 100 downloaded images each of female and male leopards.
The error rate after training is about 10%.
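For reference, the training roughly followed the pets lesson; a minimal sketch (fastai v1-style API; the folder layout, validation split, and epoch count below are placeholders, not my exact values):

from fastai.vision import *

# Hypothetical layout: data/leopards/female/*.jpg, data/leopards/male/*.jpg
path = Path('data/leopards')
data = ImageDataBunch.from_folder(path, valid_pct=0.2, size=224,
                                  ds_tfms=get_transforms()).normalize(imagenet_stats)

# resnet34-based learner, as in the pets example; note the ImageNet normalization above
learn = create_cnn(data, models.resnet34, metrics=error_rate)
learn.fit_one_cycle(4)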
Since I want to use the model in an iOS Swift-based app, I converted it to ONNX and then to CoreML:
import torch
import torch.nn as nn
import numpy as np
import onnx
from onnx_coreml import convert
from torch.autograd import Variable

model_name = 'model_leopardGender'
learn.precompute = False

# Save the trained PyTorch model (torch.save returns None, so no assignment)
torch.save(learn.model, model_name + '.h5')

# Export to ONNX, tracing with a dummy input of the expected shape (1, 3, 224, 224)
dummy_input = Variable(torch.rand(1, 3, 224, 224, device='cpu'))
torch.onnx.export(learn.model, dummy_input, model_name,
                  input_names=['image'], output_names=['gender'], verbose=False)

# Convert the ONNX model to CoreML, treating 'image' as an image input
mlmodel = convert(onnx.load(model_name), image_input_names=['image'],
                  mode='classifier', class_labels=['female', 'male'])

mlmodel.author = 'tpeter'
mlmodel.license = 'MIT'
mlmodel.short_description = 'This model takes a picture of a leopard and predicts its gender'
mlmodel.input_description['image'] = 'Image of a leopard'
mlmodel.output_description['gender'] = 'Confidence and label of predicted gender'
mlmodel.output_description['classLabel'] = 'Label of predicted gender'

# Save the CoreML model so it can be dropped into the Xcode project
mlmodel.save(model_name + '.mlmodel')
It converts without error.
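To narrow down where the discrepancy is introduced, the converted model can also be queried directly from Python on macOS before touching the Swift app; a small sketch, assuming the mlmodel object from above is still in scope and test.jpg is one of the test images (CoreML's predict() only runs on macOS):

import PIL.Image

img = PIL.Image.open('test.jpg').resize((224, 224))
out = mlmodel.predict({'image': img})
print(out['classLabel'], out['gender'])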
I then used Apple's CoreML example app (https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml?changes=_8) and replaced the default CoreML model with the one created above.
However, both the classification and the confidence differ vastly between fastai's learner.predict and the CoreML model (the CoreML model's predictions are frequently way off).
Is anybody experiencing the same? Any ideas on the root cause, e.g. any pre-processing that needs to be done on the image in Swift to match?
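One suspicion: fastai normalizes inputs with ImageNet statistics, while the converted CoreML model receives raw 0-255 pixel values, so the normalization might need to be baked into the conversion itself. A sketch of what that could look like via onnx_coreml's preprocessing_args (untested; since CoreML's image preprocessing only supports a single scalar scale plus per-channel biases, the per-channel std is approximated by its average here):

from onnx_coreml import convert
import onnx

# ImageNet stats used by fastai's normalize(imagenet_stats)
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# Approximate the per-channel std with its average (~0.226)
scale = 1.0 / (255.0 * 0.226)
args = dict(
    image_scale=scale,
    red_bias=-mean[0] / std[0],
    green_bias=-mean[1] / std[1],
    blue_bias=-mean[2] / std[2],
)

mlmodel = convert(onnx.load(model_name), image_input_names=['image'],
                  mode='classifier', class_labels=['female', 'male'],
                  preprocessing_args=args)

If that is the cause, the alternative would be to apply the same normalization to the CVPixelBuffer on the Swift side before handing the image to Vision, but doing it in the conversion seems less error-prone.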