ClassificationInterpretation.from_learner inconsistent

Hi, I am using the very latest fastai, just reinstalled from GitHub to be sure. ClassificationInterpretation.from_learner returns different results every time. I am not expecting this behavior, because I have TTA explicitly turned off and am specifying the validation set to be sure.
Three different calls, one after the other, of:

interp = ClassificationInterpretation.from_learner(learner, DatasetType.Valid, tta=False)
interp.plot_confusion_matrix()

produce different results every time, whereas I am expecting the same result. Am I misunderstanding something, or is this a bug?

Thank you.

I have no in-depth knowledge of that part of fastai, but to be completely sure that no augmentations are applied to your validation set, can you give us the output of learner.data.valid_ds.tfms?
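For example, something like:

print(learner.data.train_ds.tfms)
print(learner.data.valid_ds.tfms)

should show whether any transform is still attached to the validation split.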

Ah… thanks, that pointed me to my error. learner.data.valid_ds.tfms is indeed [], but I am subclassing ImageList to do my own tfms, so the augmentation happens at load time rather than through the fastai transform pipeline.

class MyImageList(ImageList):
    def open(self, fn: PathOrStr) -> Image:
        # Augmentation is applied here, at load time, so it affects every
        # split no matter what is in tfms.
        img = my_read_and_augment(fn)
        return vision.Image(px=pil2tensor(img, np.float32))

def get_data(df_trn, batch_size, image_size, val_idxs, df_tst):
    # No fastai transforms are passed here (tfms=[[], []]); all augmentation
    # is done inside MyImageList.open.
    return (MyImageList.from_df(df_trn, DATA_PATH/'train', 'id', suffix='.tif')
            .split_by_idx(val_idxs)
            .label_from_df(cols='label')
            .add_test(MyImageList.from_df(df_tst, DATA_PATH/'test', 'id'))
            .transform(tfms=[[], []], size=image_size)
            .databunch(bs=batch_size))

So now I need to figure out how to tell my_read_and_augment not to augment in this case; one idea is sketched below.
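One possible way (just a sketch, untested; my_read_image here is a hypothetical non-augmenting loader I would still have to write) is to keep an augment flag on the ItemList and switch it off for the validation split once the DataBunch is built:

from fastai.vision import *

class MyImageList(ImageList):
    augment = True  # class-level default: augment when an image is opened

    def open(self, fn: PathOrStr) -> Image:
        # Only augment while the flag is set; otherwise read the raw image.
        img = my_read_and_augment(fn) if self.augment else my_read_image(fn)
        return Image(px=pil2tensor(img, np.float32))

data = get_data(df_trn, batch_size, image_size, val_idxs, df_tst)
# open() is called lazily at batch time, so the flag can be flipped on the
# validation ItemList after the DataBunch exists (and likewise for the test
# split if needed).
data.valid_ds.x.augment = False

Training keeps its augmentation, while ClassificationInterpretation.from_learner should then be deterministic, because the validation images are read without random augmentation.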

Thank you!