Input image size for class prediction


Using fastai, I trained a ResNet model on my image dataset and got a high classification accuracy.
Now, I would like to use this trained model on new images.

During CNN training, I had scaled the images down to 400x400 pixels by using
data = ImageDataBunch.from_folder(datasetPath, valid_pct=0.2, size=400, bs=32)

Now, for classifying new test images (inference), I believe I need to rescale them to this size too. However, after loading my trained model with load_learner, I am not sure how to apply this resizing transform to a single input image. I have the following code:

my_new_image = open_image("image-1.jpg")

I am wondering if there is a way to rescale the image before calling predict (similar to the size=400 argument in ImageDataBunch). Otherwise, I am afraid I may get inaccurate results, because the input images are much larger than 400x400.

Thank you very much for your help during these trying times! :slight_smile:


The way I did it was:

tfms = get_transforms(do_flip=False, p_affine=0, p_lighting=0)[0]
img = open_image('temp.png')
img = img.apply_tfms(tfms=tfms, size=(224, 336), resize_method=ResizeMethod.SQUISH)  # SQUISH == 3
prediction = learn.predict(img)
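If you would rather resize the files up front (or fastai is not available in your preprocessing step), the same squish-to-size resize can be done with Pillow before the image ever reaches open_image. This is a minimal sketch, not the fastai-internal method: it assumes Pillow is installed, and that squishing to a square (rather than cropping or padding) matches how your training images were resized; the function name `resize_for_inference` is just an illustration.

```python
from PIL import Image

def resize_for_inference(path, size=400):
    """Squish an image to size x size pixels, mirroring the size=400
    used with ImageDataBunch during training (squish behavior assumed).
    Returns a PIL Image that can be saved and then loaded with open_image."""
    img = Image.open(path).convert("RGB")  # drop alpha channel if present
    return img.resize((size, size), Image.BILINEAR)
```

You could then save the result to a temporary file and pass that path to open_image, or adapt the same idea with apply_tfms as in the snippet above.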

I'm just starting out as well, so anyone can correct me if I messed up.
