Just to get started, I used the same set of images (the MNIST JPGs) as both the training and validation sets:
```python
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz),
                                      trn_name='trainingSet', val_name='trainingSet',
                                      test_name='testSet')
```
What I noticed is that the validation loss was consistently lower than the training loss, even though the two sets were identical. I find this interesting: since the exact same images are used for training and validation, I would expect the two losses to be the same, or at least very similar. See below. Does this make sense?
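One common explanation for this pattern is that regularization such as dropout is active when the training loss is computed but switched off during validation, so the same data scores better in eval mode. The sketch below illustrates this effect with a made-up toy model in plain PyTorch (the layer sizes, data, and hyperparameters are all hypothetical, not taken from the fastai model above):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy model with heavy dropout; dropout is only
# active in train mode, not in eval mode.
model = nn.Sequential(
    nn.Linear(10, 50),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(50, 2),
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# One fixed batch used as both "training set" and "validation set".
x = torch.randn(256, 10)
y = torch.randint(0, 2, (256,))

# Fit for a few hundred steps with dropout enabled.
model.train()
for _ in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# "Training" loss: measured with dropout still active (train mode),
# averaged over several stochastic forward passes.
model.train()
with torch.no_grad():
    train_loss = sum(loss_fn(model(x), y).item() for _ in range(20)) / 20

# "Validation" loss: dropout off (eval mode), same data.
model.eval()
with torch.no_grad():
    val_loss = loss_fn(model(x), y).item()

print(f"train-mode loss: {train_loss:.3f}  eval-mode loss: {val_loss:.3f}")
```

On a run like this the eval-mode loss on the identical data typically comes out lower than the train-mode loss, since evaluation uses the full network while training drops half the activations at random.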