I have noticed some very strange behavior in my model. When I print the probabilities for photos from the validation set using:
plot_val_with_title(rand_by_correct(True), "Correctly classified")
I get different probabilities than when I score the exact same pictures myself with the following code:
image_path = '/home/ubuntu/notebooks/photos/valid/group1/979794_3.jpg'
trn_tfms, val_tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1)  # get transformations
image = open_image(image_path)
im = val_tfms(image)  # apply validation-time transforms only
learn.precompute = False  # we'll pass in a raw image, not precomputed activations
log_pred = learn.predict_array(im[None])  # im[None] adds a batch dimension; returns log-probabilities
prob = np.exp(log_pred)  # convert log-probabilities to probabilities
pred = np.argmax(prob, axis=1)
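For reference, the last two lines assume the model returns log-softmax outputs, so exponentiating recovers ordinary probabilities that sum to 1. A minimal standalone sketch (the logits here are made-up illustrative values, not from my model):

```python
import numpy as np

# Hypothetical raw scores (logits) for one image over three classes.
logits = np.array([[2.0, 0.5, -1.0]])

# log-softmax of the logits (what predict_array is assumed to return here).
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

# Exponentiating gives probabilities that sum to 1 across classes.
probs = np.exp(log_probs)

# The predicted class is the argmax over the class axis.
pred = np.argmax(probs, axis=1)
```

If this exponentiation step is skipped, the raw log values get misread as probabilities, which may be part of the confusion.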
For example, one image scores 0.9 one way and 0.99 the other; another scores 0.74 vs. 0.57.
It's all happening in one notebook, on the same model instance, and with the same settings.
I'd be very grateful for some feedback, as this behavior is totally unexpected and leads to incorrect probabilities for new pictures (outside the validation/train sets).