What exactly is being done to the image before prediction in a learner?

Hi!

Here’s my situation. I was given a task: image classification. I collected the data, I labelled it, and then I used fastai to build a model, and I am now really happy with the result :smiley:

But here’s the catch… I need to deploy it using an inference-optimized framework! Meaning I cannot just serve the entire fastai model and expose the .predict method of the learner!

So here is what has happened so far:

  1. I exported the fastai model.
  2. The model is now raw PyTorch, and I can load it and make predictions with it. It is already in eval mode.
  3. For preprocessing I used OpenCV resizing and division by 255: I load an image with imread, convert it to RGB, and then apply that preprocessing (roughly as in the sketch below).
  4. I ran the model on the preprocessed image.
  5. I raised an eyebrow, and for the same image, I called the learner’s .predict method, supplying it the path of the image.
  6. And this is the point I am currently at: the softmax output from my own preprocessing plus manual inference on the PyTorch model differs from the result I get from calling .predict on the path of the image.
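
Roughly, the preprocessing I’m doing looks like this (a sketch, not my exact code; IMG_SIZE is just a placeholder for whatever input size the model was trained with):

```python
import cv2
import torch

IMG_SIZE = 224  # placeholder: the input size used during training

def preprocess_opencv(path):
    # cv2.imread returns BGR, so convert to RGB first
    img = cv2.imread(path)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # plain resize to the model's input size
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))
    # scale to [0, 1], reorder to CHW and add a batch dimension
    x = torch.from_numpy(img).float().div_(255)
    return x.permute(2, 0, 1).unsqueeze(0)

# model is the exported PyTorch model in eval mode
# probs = torch.softmax(model(preprocess_opencv("some_image.jpg")), dim=1)
```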

What am I missing here?

Did you apply the fastai augmentation transforms (validation only) that you used while training? I’m not sure your OpenCV transforms are equivalent to those.

That might be what I’m missing. From what I understand, in validation mode the augmentations would just apply resizing and center cropping, right? What about normalization? I already tried just dividing by 255, and I also tried the ImageNet mean; neither seems to work :confused:

I have tried reading the code but it’s pretty tough to navigate…

Fastai uses Pillow, not OpenCV, for image reading and initial processing. In general, you want to preprocess images with the same package that was used when the model was trained.

Pillow uses RGB format while OpenCV uses BGR format. There are also some differences between the two packages in their interpolation implementations (and “bugs”) to watch out for.

For augmentations, I believe only item_tfms apply during validation. You’ll have to look at the docs to verify what the augmentations you use do during validation.

And as you mentioned, you need to recreate Normalize and IntToFloatTensor.
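
If it helps, you can inspect what the DataLoaders actually apply at inference time and then redo it by hand. A minimal sketch (assuming the usual imagenet_stats, a uint8 CHW tensor x, and that learn is your Learner):

```python
import torch
from fastai.vision.all import imagenet_stats

# Inspect the transform pipelines of the validation DataLoader:
# after_item usually holds the item_tfms (e.g. Resize),
# after_batch usually holds IntToFloatTensor and Normalize.
print(learn.dls.valid.after_item)
print(learn.dls.valid.after_batch)

# Recreating IntToFloatTensor + Normalize by hand on a uint8 CHW tensor x:
mean, std = map(torch.tensor, imagenet_stats)
x = x.float() / 255                                   # IntToFloatTensor: bytes -> [0, 1]
x = (x - mean[:, None, None]) / std[:, None, None]    # Normalize with ImageNet stats
```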

Thank you, I managed to make it work using the following:

  • RGB conversion
  • resize with center crop
  • divide by 255
  • normalize using the ImageNet mean

There are still some differences, but they’re pretty minor now.
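
In case it’s useful to someone else, the pipeline looks roughly like this (a sketch rather than my exact code; IMG_SIZE and the ImageNet statistics are the usual assumptions):

```python
import numpy as np
import torch
from PIL import Image

IMG_SIZE = 224  # placeholder: the size used by the item_tfms during training
IMAGENET_MEAN = torch.tensor([0.485, 0.456, 0.406])
IMAGENET_STD = torch.tensor([0.229, 0.224, 0.225])

def preprocess(path):
    # Pillow reads RGB directly, so no BGR->RGB step is needed (unlike OpenCV)
    img = Image.open(path).convert("RGB")
    # resize the short side to IMG_SIZE, then center crop to IMG_SIZE x IMG_SIZE
    w, h = img.size
    scale = IMG_SIZE / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BILINEAR)
    w, h = img.size
    left, top = (w - IMG_SIZE) // 2, (h - IMG_SIZE) // 2
    img = img.crop((left, top, left + IMG_SIZE, top + IMG_SIZE))
    # to a float tensor in [0, 1], then normalize with the ImageNet stats
    x = torch.from_numpy(np.array(img)).float().div_(255).permute(2, 0, 1)
    x = (x - IMAGENET_MEAN[:, None, None]) / IMAGENET_STD[:, None, None]
    return x.unsqueeze(0)
```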

cheers guys

Hello there. I would like to ask for some help because I do not get it here.
I am dealing with a segmentation problem.
Indeed, FastAI uses Pillow for reading the images; my masks are encoded in ‘L’ mode, meaning each pixel holds a value between 0 and 255.
The images and their masks are read correctly.
I build the DataLoaders (the dls) and then the learner.
But if I try
learn.lr_find()
it stops with the error:
IndexError: Target 199 is out of bounds.
Here 199 is one of the pixel codes from the ‘L’-mode masks, so it is present in the data, but the model does not seem to handle it. Does anybody know where I could specify it?

Are you saying that 199 is in your masks? Mask values must be contiguous in Fast.ai, ranging from 0 to N-1, with N being the number of classes. I would guess you have to reprocess your masks.
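
For example, if the masks use arbitrary codes such as 0, 57 and 199, something like this sketch remaps them to contiguous class indices (the mapping and paths are just placeholders):

```python
import numpy as np
from PIL import Image

# hypothetical mapping from the raw pixel codes to contiguous class indices
code_to_class = {0: 0, 57: 1, 199: 2}

def remap_mask(in_path, out_path):
    mask = np.array(Image.open(in_path))    # 'L'-mode mask, pixels hold the raw codes
    out = np.zeros_like(mask)
    for code, idx in code_to_class.items():
        out[mask == code] = idx             # rewrite each raw code as its class index
    Image.fromarray(out.astype(np.uint8), mode="L").save(out_path)
```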

Hi Jürgen,
Yes, indeed. Just after I asked for help, I understood the origin of the problem: I need to reprocess all the masks.
May I ask where you found this information about the algorithm? In the end, last Friday evening I came to the same conclusion, but the amount of time spent guessing and reverse-engineering is considerable. Do you know if there is a tutorial/lesson on how the databunch has to be built?
Anyhow, if nothing comes to mind, do not worry, and thank you very much for confirming, with a detailed explanation, how FastAI ingests the images.
Best regards,
Andrea

Hi Andrea,
I had stumbled into the same issue a year ago :slight_smile:
What helped me were the tutorials of Zachary Mueller at walkwithfastai.com. For example https://walkwithfastai.com/Segmentation.