I have trained and exported a model that works with image data. While trying to do inference with the following code:

```
learn = create_cnn(query_ds, models.resnet50, metrics=accuracy)

learn.get_preds(learn.data.train_dl)
```

it produces:

```
[tensor([[0.2794, 0.4179, 0.3028],
[0.2198, 0.4329, 0.3473],
[0.1507, 0.6839, 0.1654],
[0.1670, 0.5620, 0.2710],
[0.2090, 0.4640, 0.3270],
[0.1888, 0.5622, 0.2490],
[0.3274, 0.3213, 0.3513]]), tensor([0, 1, 1, 2, 0, 2, 0])]
```

When I run the same block again, I get different scores:

```
[tensor([[0.2329, 0.4419, 0.3252],
[0.2527, 0.4269, 0.3203],
[0.3447, 0.3332, 0.3220],
[0.3269, 0.4221, 0.2510],
[0.3003, 0.3916, 0.3081],
[0.4228, 0.3124, 0.2648],
[0.3449, 0.4089, 0.2462]]), tensor([0, 1, 1, 2, 0, 2, 0])]
```

I have also tried the data block tutorial for digit prediction. Here too, the prediction probabilities change when I load the same model and run inference multiple times. For example,

```
learn = load_learner(mnist)

img = data.train_ds[1][0]

img.show()

learn.predict(img)
```

it produces:

```
(Category 7, tensor(1), tensor([0.0628, 0.9372]))
```

But when I run the same block again, it produces:

```
(Category 7, tensor(1), tensor([0.0170, 0.9830]))
```

Can you please let me know the reason behind this? Ideally, during inference the weights should be frozen, so the output vectors should remain the same on every run, yet there is a big difference in the final confidence scores. I wanted to use the scores for embedding search, but because of this difference in the final embeddings for the same image, I am facing issues.
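To confirm my expectation that inference should be deterministic, I tried a minimal plain-PyTorch sketch (the layer sizes here are made up, not my actual model): with a dropout layer present, repeated forward passes in train mode can differ, but after calling `eval()` (and running under `torch.no_grad()`) the outputs are identical across runs:

```python
import torch
import torch.nn as nn

# Toy model with a dropout layer (hypothetical sizes, just for illustration).
# Dropout is stochastic in train mode, which can make repeated forward
# passes on the same input disagree; eval mode disables it.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5), nn.Linear(8, 3))
x = torch.randn(1, 8)

model.eval()  # switch dropout (and batchnorm) to deterministic inference mode
with torch.no_grad():
    out1 = model(x)
    out2 = model(x)

# In eval mode the two passes agree exactly.
print(torch.equal(out1, out2))
```

So it seems something on my side is still running the model in training mode, or the dataloader itself is non-deterministic.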

Kindly suggest a way to solve this issue.

Thanks
