Different outputs on forward pass

Dear all,

Good Day!

I have trained a ResNet-50 model by freezing all but the final layer (I also froze the batchnorm layers of the ImageNet backbone during training). I then save the state_dict of just the head, and during inference I load it back into the head manually.
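Roughly, my setup looks like the sketch below (simplified with torchvision; the 10-class head and the file name are just placeholders, not my exact code):

```python
import torch
from torchvision import models

# Pretrained ResNet-50 body, fully frozen; only the final layer is trainable
model = models.resnet50(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, 10)  # placeholder head (10 classes)

# After training, save only the head's weights ...
torch.save(model.fc.state_dict(), "head.pth")

# ... and at inference time rebuild the same model and load them back into the head
model.fc.load_state_dict(torch.load("head.pth"))
model.eval()
```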

However, when I do a forward pass on the same image, I get a different output every time. I have looked everywhere in the forums and in the class lectures but couldn't figure anything out. Can someone please point out where I might be going wrong?

Thanks & Regards,
Vinayak Nayak.


Fastai’s Resize has some randomness involved, so you’re not getting the exact same image every time, hence the difference in the logits. If your model were fully trained, you would get more or less the same predictions even if it was fed different parts of the image, but your model’s body has random weights.

Try img.resize (although it distorts the image because it doesn’t maintain the aspect ratio), or seed PyTorch, NumPy, etc. for reproducibility.
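For the seeding route, something along these lines usually covers the common sources of randomness (the seed value itself is arbitrary):

```python
import random

import numpy as np
import torch

def seed_everything(seed=42):
    # pin down Python, NumPy and PyTorch RNGs so repeated runs give the same numbers
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)  # safe no-op on CPU-only machines
```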

Good luck!

That randomness only applies when split_idx is 0 (the training split). When it’s 1 (the validation split) it will always center crop. I think this should work:

```python
Resize(224, split_idx=1)
```

Otherwise do:
```python
r = Resize(224)
r.pcts = (0.5, 0.5)  # take the crop from the centre instead of a random position
r(im)
```

EDIT: it’s `Resize(224)(im, split_idx=1)`
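As a quick sanity check (the image path here is just a placeholder), calling the transform with split_idx=1 should give you the exact same crop every time:

```python
from fastai.vision.all import *

im = PILImage.create("path/to/your/image.jpg")  # placeholder path
rsz = Resize(224)

# split_idx=1 makes Resize behave like the validation pipeline, i.e. a deterministic
# centre crop, so out1 and out2 are identical crops of the same image
out1 = rsz(im, split_idx=1)
out2 = rsz(im, split_idx=1)
```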


Thank you! I always use seeds for reproducibility, but I didn’t know about split_idx. Much appreciated!


Thanks @muellerzr, that worked well.


Thanks Bob,

img.resize was a good suggestion, and I also found the split_idx suggestion above useful 🙂
