Why does accuracy drop during deployment of a fastai model?

I’m using a fastai model for deploying my image classifier, and I use the same dataset to check the performance of the model. I’m taking a cascade approach, meaning I chain 2-3 models together. If I check the performance of each model individually, the accuracy is pretty good, but when I run them all in a cascade during deployment the accuracy drops. Why does this happen? I’m using pickle files for inference during deployment, which I create with fastai’s learn.export() method.
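For illustration, a minimal sketch of the kind of cascade I mean (the paths, the "animal" label, and the stage1/stage2 names are just placeholders, not my actual pipeline; load_learner is called the same way I do during deployment):

from fastai.vision import *

stage1 = load_learner("path to stage1 export")   # placeholder paths
stage2 = load_learner("path to stage2 export")

def cascade_predict(image_path):
    img = open_image(image_path)
    # Stage 1 makes a coarse prediction
    pred_class, pred_idx, probs = stage1.predict(img)
    # Only hand the image to stage 2 for a finer-grained label
    # when stage 1 predicts the (placeholder) "animal" class
    if str(pred_class) == "animal":
        pred_class, pred_idx, probs = stage2.predict(img)
    return str(pred_class), probs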


Does the cascade of models show higher accuracy before deployment? Are you positive they’re all in .eval() mode when you check their accuracy before deployment?

Also, are you sure the transforms are all set up the same?
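For instance, a quick sanity check along these lines (just a sketch; stage1/stage2 are placeholders for however you hold your learners, and learn.model is a plain torch nn.Module):

# Confirm each model in the cascade is in eval mode before measuring accuracy
for name, learn in [("stage1", stage1), ("stage2", stage2)]:   # placeholder names
    if learn.model.training:
        print(f"{name} is still in train mode -- dropout/batchnorm will behave differently")
        learn.model.eval()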

I use this method for inference:

learn = load_learner("pickle file path")
pred = learn.predict(open_image(image_path))
Do I have to call learn.model.eval() after load_learner and before prediction, or can I do the prediction directly without calling eval() in fastai?

You can use the predict method directly. (Just make sure your ensemble isn’t expecting the raw values, as the outputs of predict are already softmax’d.)
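To make that concrete, a small sketch assuming fastai v1’s predict on a single image (image_path and the example label are placeholders):

img = open_image(image_path)                      # image_path is a placeholder
pred_class, pred_idx, probs = learn.predict(img)
print(pred_class)    # decoded label, e.g. Category dog (hypothetical)
print(pred_idx)      # index of the predicted class, as a tensor
print(probs)         # post-softmax probabilities, not raw logits
print(probs.sum())   # sums to ~1.0

So if a later stage of the cascade consumes these values expecting raw logits, its behaviour at deployment will differ from what you validated per model.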

Then why do I get an accuracy difference during cascade inference compared to individual model inference?

I’m unfamiliar with how cascading works overall (I haven’t played with it myself), but note that a fastai model is just a torch model; fastai is the training framework. So I’d check the PyTorch forums/Kaggle to see whether something like this has been observed before. :)

(To my knowledge, I’m not aware of anyone doing cascading from a fastai perspective.)

Okay, thanks for your reply.