Accuracy mismatch on ONNX export

I have been trying to match the accuracy of models converted to ONNX with the original Fastai model, but I am not able to reproduce the same numbers. The problem is a difference between the preprocessing done by Fastai and what I have been doing for the ONNX input.

For fastai v2, I am doing the following for a simple 2-class image classification task:

  • Read and open the image, converting to RGB: Image.open().convert('RGB')
  • Resize to 224, keeping the aspect ratio the same, with resample mode 2 (PIL bilinear)
  • Normalize the image by dividing by 255

I want to replicate the default preprocessing done by Fastai without using fastai in production.

Did you train by normalizing with ImageNet stats? That may be where your mismatch is; you need to normalize with those statistics.
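
On the ONNX side, applying those statistics would look roughly like this (a sketch, assuming a float32 CHW image already scaled to [0, 1]):

import numpy as np

mn = np.asarray([0.485, 0.456, 0.406], dtype=np.float32)
std = np.asarray([0.229, 0.224, 0.225], dtype=np.float32)

def normalize_imagenet(img_chw):
    # broadcast the per-channel stats over H and W
    return (img_chw - mn[:, None, None]) / std[:, None, None]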

No, I assumed it was trained with ImageNet stats by default, but that was not the case.

I re-encoded the tensors received from one_batch to see where I was going wrong, and I was able to reproduce the images by just multiplying the tensors by 255.
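
In case it helps, this is roughly how I checked (a sketch; dls is the DataLoaders built below):

import numpy as np
import PIL.Image

xb, yb = dls.one_batch()
arr = (xb[0].cpu().numpy() * 255).astype(np.uint8)  # undo the /255 scaling
PIL.Image.fromarray(np.moveaxis(arr, 0, 2)).show()  # CHW -> HWC for PIL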

How did you build your DataBlock/DataLoaders?

I think I used the simplest variant among all the options that are there :smile:

dls = ImageDataLoaders.from_df(df.loc[:, ['frname', 'label', 'is_valid']],
                               path=imagepath, valid_col='is_valid',
                               label_col='label', seed=42, item_tfms=Resize(224))

I am using the following loop for inference from ONNX:

import numpy as np
import PIL.Image
from tqdm import tqdm

# ImageNet stats -- defined here but never applied below, since the model
# turned out not to be normalized with them
mn = np.asarray([0.485, 0.456, 0.406])
std = np.asarray([0.229, 0.224, 0.225])

for each in tqdm(range(0, 4080, 4)):
    a = []
    for each1 in jpgs[each:each + 4]:
        image_read_resize = PIL.Image.open(each1.as_posix()).convert('RGB')
        # hard-coded for this dataset's aspect ratio; resample=2 is PIL bilinear
        image_read_resize = image_read_resize.resize((287, 224), resample=2)

        image_rgb_224 = np.array(image_read_resize)[:, 32:-31, :]  # center-crop width to 224
        image_move_axis = np.moveaxis(image_rgb_224, 2, 0)  # HWC -> CHW
        image_norm = image_move_axis.astype(np.float32) / 255
        a.append(image_norm)

    # onnxruntime expects an ndarray, not a Python list of arrays
    gets = ort_sess.run(['491'], {'input.1': np.stack(a)})

Are you using cnn_learner? If so, it adds ImageNet normalization via a line like the one sketched below.
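
From memory the relevant logic looks roughly like this (a sketch of fastai v2's behaviour, not the verbatim source; check your installed version):

from fastai.vision.all import Normalize, imagenet_stats

def add_norm_sketch(dls, pretrained=True, stats=imagenet_stats):
    if pretrained and stats is not None:
        dls.add_tfms([Normalize.from_stats(*stats)], 'after_batch')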

You can further check your stats via dls.after_batch.normalize.mean and dls.after_batch.normalize.std (IIRC).
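
Something like this (a sketch; the attribute only exists if a Normalize transform was actually added):

norm = getattr(dls.after_batch, 'normalize', None)
if norm is None:
    print('No Normalize transform in after_batch')
else:
    print(norm.mean, norm.std)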


I wanted to use MobileNet, but since it is not available in fastai v2, I am using Learner:

learn = Learner(dls, model, metrics=[error_rate, accuracy])

I suspect there is a difference between my resizing method and fastai's resize procedure during batch creation.

I am not able to find where the image open and resize procedures are called in the pipeline.

(I have already verified the reverse transformation for the normalization with np.allclose, so that part checks out.)

The first is in the type transforms, the second is after_item. (It’s just PILImage.create)
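
You can see this by printing the pipelines (a sketch; attribute names as in fastai v2):

print(dls.after_item)   # item transforms, e.g. Resize (crop_pad) + ToTensor
print(dls.after_batch)  # batch transforms, e.g. IntToFloatTensor (+ Normalize if added)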

But it is just inheriting from the PILBase class and calling load_image,

which is just opening the image and converting its mode.
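
So as far as I can tell it boils down to something like this (a sketch, not the verbatim fastai source):

import PIL.Image

def load_image_sketch(fn, mode='RGB'):
    im = PIL.Image.open(fn)
    im.load()  # force decoding so the file handle can be closed
    return im.convert(mode)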

Resize(224) in the dataloader's transforms is calling crop_pad.

I think this function is responsible for the resize; my understanding of it is sketched below.
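
What I believe it does at validation time, as a sketch (not the fastai implementation; resample=2 is PIL bilinear):

import PIL.Image

def resize_crop_sketch(img, size=224):
    # scale so the shorter side equals `size`, keeping the aspect ratio
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), resample=2)
    # then center-crop the longer dimension down to `size`
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))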

I was able to reproduce similar results by using:

  • Resizing with PIL; using cv2 does not reproduce the same results
  • Maintaining the aspect ratio during resize and then cropping the longer dimension accordingly; I am not sure why this step increased the similarity of the reproduction

Still, I was not able to reproduce the exact matrices:

  • There are differences in the matrices we get from PIL and cv2 image reads (the difference is very small, but a comparison of the two image tensors does not return True; see the sketch below). In production I am getting the matrices from a different source, hence I cannot actually change this step.
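
This is how I compared the two reads (a sketch; 'img.jpg' stands in for a real file):

import numpy as np
import PIL.Image
import cv2

pil_arr = np.array(PIL.Image.open('img.jpg').convert('RGB'))
cv_arr = cv2.cvtColor(cv2.imread('img.jpg'), cv2.COLOR_BGR2RGB)

print(np.array_equal(pil_arr, cv_arr))  # strict equality: False
diff = pil_arr.astype(np.int16) - cv_arr.astype(np.int16)
print(np.abs(diff).max())  # but the maximum per-pixel difference is tiny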