I am experiencing a mismatch in the classification/prediction output of a trained fast.ai model after conversion to Core ML (via ONNX as an interim step), on the same test data.
Following a safari in South Africa during which we were lucky enough to have two leopard sightings, we struggled to identify the gender of the animals. Following the pets example, I trained a resnet34-based learner on 100 downloaded images each of female and male leopards.
The error rate after training is about 10%.
As I want to use the model in an iOS Swift-based app, I converted it to ONNX and then to Core ML.
Conversion:
import torch
from torch.autograd import Variable
import torch.onnx
import onnx
from onnx_coreml import convert

model_name = 'model_leopardGender'

learn.precompute = False
learn.model.cpu()

# Save the PyTorch model (torch.save returns None, so no assignment)
torch.save(learn.model, model_name + '.h5')

# Export to ONNX with a dummy input matching the 224x224 training size
dummy_input = Variable(torch.rand(1, 3, 224, 224, device='cpu'))
torch.onnx.export(learn.model, dummy_input, model_name,
                  input_names=['image'], output_names=['gender'], verbose=False)

# Convert the ONNX model to Core ML
mlmodel = convert(onnx.load(model_name), image_input_names=['image'],
                  mode='classifier', class_labels=['female', 'male'])
mlmodel.author = 'tpeter'
mlmodel.license = 'MIT'
mlmodel.short_description = 'This model takes a picture of a leopard and predicts its gender'
mlmodel.input_description['image'] = 'Image of a leopard'
mlmodel.output_description['gender'] = 'Confidence and label of predicted gender'
mlmodel.output_description['classLabel'] = 'Label of predicted gender'
mlmodel.save(f'{model_name}.mlmodel')
It converts without error.
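One detail worth checking at the convert step: as far as I can tell, onnx_coreml's convert applies no input normalization by default, so the Core ML model receives raw 0-255 pixels, while a fast.ai learner is normally fed ImageNet-normalized tensors. If that applies here, the normalization can be folded (approximately) into the Core ML preprocessing stage via preprocessing_args. A hedged sketch of the same convert() call as above, assuming the learner used imagenet_stats; image_scale is a single scalar, so the three per-channel std values can only be approximated, 0.226 being their rough average:

```python
# Sketch only: convert() with preprocessing_args folded in.
# Assumes training used ImageNet stats (mean 0.485/0.456/0.406,
# std 0.229/0.224/0.225); 0.226 approximates the three stds.
mlmodel = convert(
    onnx.load(model_name),
    image_input_names=['image'],
    mode='classifier',
    class_labels=['female', 'male'],
    preprocessing_args={
        'image_scale': 1.0 / (255.0 * 0.226),
        'red_bias': -0.485 / 0.226,
        'green_bias': -0.456 / 0.226,
        'blue_bias': -0.406 / 0.226,
    },
)
```

With this in place the Swift side can pass the image straight to the model instead of normalizing it by hand.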
I then used Apple's Core ML example app (https://developer.apple.com/documentation/vision/classifying_images_with_vision_and_core_ml?changes=_8) and replaced the default Core ML model with the one created above.
Classification as well as confidence, however, differ vastly between fast.ai's learner.predict and the Core ML model (the Core ML predictions are frequently way off).
Has anybody experienced the same? Any ideas on the root cause, e.g. any pre-processing that needs to be done on the image in Swift to match?
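On that last point: one common cause of exactly this symptom is that the Core ML model sees raw 0-255 pixels while learner.predict sees ImageNet-normalized tensors. Assuming the learner was trained with fast.ai's default imagenet_stats (an assumption, not confirmed by my code above), the Swift/Core ML side would have to reproduce the divide-by-255 followed by (x - mean) / std transform, which collapses into a single affine step per channel. A small numpy sketch of that equivalence (the pixel values are just an example):

```python
import numpy as np

# ImageNet statistics (assumption: the learner was trained with
# fast.ai's default imagenet_stats normalization)
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])

# fast.ai pipeline: divide by 255, then normalize per channel
raw_pixel = np.array([128.0, 64.0, 200.0])  # an example RGB pixel in 0-255
fastai_value = (raw_pixel / 255.0 - mean) / std

# Equivalent single affine transform per channel: value = pixel * scale + bias
# (what the Swift side would need to apply before inference)
scale = 1.0 / (255.0 * std)
bias = -mean / std
coreml_value = raw_pixel * scale + bias

assert np.allclose(fastai_value, coreml_value)
```

If the Core ML model currently gets unnormalized pixels, predictions being "frequently way off" is what I would expect.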