Different training and validation loss for same data

Just to get started, I used the same set of images (MNIST JPGs) as both the training and validation sets for ImageClassifierData.from_paths:

```python
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz),
                                      trn_name='trainingSet', val_name='trainingSet',
                                      test_name='testSet')
```

What I noticed is that the validation loss was consistently lower than the training loss, even though the two sets were identical. I find this interesting: since the same images are used for training and validation, I would expect the losses to be the same or at least very similar. Does that make sense?


This is interesting. One likely reason is dropout: it is active during training but disabled during validation/testing, so the validation forward pass uses the full network and tends to produce a lower loss on the same data. I am curious what other possible reasons there could be.
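If it helps, here is a minimal PyTorch sketch of that effect (not the fastai code from this thread; the model sizes and data are made up for illustration). The same batch is passed through the same network twice, once in train mode and once in eval mode:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy network with a dropout layer (hypothetical sizes, random data)
model = nn.Sequential(
    nn.Linear(10, 100),
    nn.ReLU(),
    nn.Dropout(0.5),
    nn.Linear(100, 2),
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 10)           # the same batch is used for both passes
y = torch.randint(0, 2, (64,))

model.train()                      # dropout active: units randomly zeroed
train_loss = loss_fn(model(x), y).item()

model.eval()                       # dropout disabled: full network used
eval_loss = loss_fn(model(x), y).item()

print(f"train-mode loss: {train_loss:.4f}")
print(f"eval-mode loss:  {eval_loss:.4f}")   # typically lower on the same batch
```

In train mode, half the hidden units are randomly zeroed on each forward pass, so the loss is noisier and usually higher; in eval mode, dropout is a no-op, which is why the validation loss can come out lower even on identical data.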

Update: This is where Jeremy explains it.

@akschougule
That is great. Thanks for the link to the video!