I’m using fastai v1 and have trained a ResNet18 on my own dataset.
I’ve exported the “export.pkl” file from the data object.
I’ve also saved my model weights to “model1” with learn.save()
Now I want to run predictions on new, never-seen images.
I think that neither of the two exported files includes the size of the images used during training (288x288 in my case).
As a consequence, when I load a new image (initial size 480x480) for inference, I have to do:
img = open_image(“folder_for_inference/image1.jpg”).resize(288)
In some cases it is nice behaviour to be able to run inference on images of any size, as the resize may completely change the structure of the object to classify.
In my case this is not true: inference on non-resized images gives significantly worse results.
Of course the resize can be done manually, as I do, but it is “dangerous”: I usually run several tests at different img_size values, so I could easily make a mistake.
It would be great if “data.export()” could take an attribute (set to False by default) that records the image size used during training.
That way, the expected behaviour would be that images are automatically resized during inference to the exact size used during training.
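In the meantime, a workaround I can imagine (just a sketch, not a fastai feature: the helper names and the “train_size.json” sidecar file are made up here) is to write the training size next to “export.pkl” at export time and read it back at inference, so the resize value is never typed by hand:

```python
import json
from pathlib import Path

# Sketch: record the training image size in a small sidecar file next to
# export.pkl. "train_size.json" and both helper names are hypothetical.

def save_train_size(export_dir, size):
    """Call right after data.export(), with the img_size used for training."""
    (Path(export_dir) / "train_size.json").write_text(json.dumps({"size": size}))

def load_train_size(export_dir):
    """Call at inference time instead of hard-coding the size."""
    return json.loads((Path(export_dir) / "train_size.json").read_text())["size"]
```

At inference the line then becomes img = open_image(…).resize(load_train_size(path)), so changing img_size between experiments only has to happen in one place.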
In any case, thanks a lot for the GREAT WORK you have already done on this library. I’m an early follower of the fastai courses, remotely from France…