Using fastai, I trained a ResNet model on my image dataset and got a high classification accuracy.
Now, I would like to use this trained model on new images.
During CNN training, I scaled the images down to 400x400 pixels using
data = ImageDataBunch.from_folder(datasetPath, valid_pct=0.2, size=400, bs=32)
Now, for classifying new test images (inference), I believe I need to rescale them to the same size. However, after loading my trained model with load_learner, I am not sure how to apply this resizing transform to the input image. I have the following code:
my_new_image = open_image("image-1.jpg")
learner.predict(my_new_image)
Is there a way to apply the same rescaling with these methods (similar to what ImageDataBunch does during training)? Otherwise, I am afraid I may get inaccurate results, because the input images are much larger than 400x400.
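As a workaround, I considered resizing the image manually with PIL before passing it to predict, along these lines (the dummy in-memory image below is just a stand-in for my real test file; I am not sure this matches the interpolation and normalization that ImageDataBunch applied during training, which is exactly what I'd like to confirm):

```python
from PIL import Image

# Stand-in for opening my actual large test image, e.g. Image.open("image-1.jpg")
pil_img = Image.new("RGB", (1200, 900))

# Manually scale down to the training size before prediction.
# Unclear to me whether this reproduces fastai's own resize transform.
pil_img = pil_img.resize((400, 400))
print(pil_img.size)
```

But doing this by hand feels fragile, so a built-in way to attach the transform to the learner would be preferable.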
Thank you very much for your help during these trying times!