learn.validate() loss != learn.get_preds(dl=dls.valid, with_loss=True) loss.mean()

Is this because learn.validate() calculates the loss by batch … whereas learn.get_preds returns the actual loss for each example?

Maybe this will help…

The batch explanation doesn’t apply, because of how means combine: the mean of each batch is multiplied by that batch’s size, the results are totaled, and the total is divided by the total number of samples. That equals the mean over all samples. (It would matter only if your loss function had a non-linear outer step.)
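A quick sketch of that arithmetic, in plain Python rather than fastai (the per-sample loss values here are made up for illustration):

```python
# Size-weighted batch means reproduce the global mean exactly,
# even when the last batch is smaller than the others.
losses = [0.2, 0.5, 0.9, 0.1, 0.4, 0.7, 0.3]        # hypothetical per-sample losses
batches = [losses[0:3], losses[3:6], losses[6:7]]    # last batch is partial

def mean(xs):
    return sum(xs) / len(xs)

# Weight each batch mean by its batch size, then divide by the total count.
weighted = sum(mean(b) * len(b) for b in batches) / len(losses)

assert abs(weighted - mean(losses)) < 1e-12
```

So batching alone cannot change the mean, as long as the outer reduction is a plain (weighted) average.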

What you’re saying about the loss calculation makes sense, but the referenced post didn’t really answer my question, since I’m using the validation set in both cases:

learn.validate() 
# loss = 0.40313586592674255

probs, targs, loss = learn.get_preds(dl=dls.valid, with_loss=True)
# loss = 0.4830322265625

Why is the loss different from one approach to the other?

Sorry, I misread your question.

When I test with drop_last=True on the validation DataLoader, the losses differ; with drop_last=False, they match. It looks like get_preds drops the last partial batch, but validate does not.
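The effect is easy to simulate without fastai: if one code path averages over all samples while the other drops the final partial batch, the two "mean losses" disagree whenever the leftover samples don’t happen to sit at the overall mean (all numbers below are hypothetical):

```python
# Simulate drop_last: 7 per-sample losses, batch size 3, so one
# partial batch of 1 sample gets dropped when drop_last=True.
losses = [0.3, 0.5, 0.2, 0.6, 0.4, 0.9, 1.2]
bs = 3

kept = losses[: (len(losses) // bs) * bs]   # drop_last=True keeps only 6 samples

mean_all = sum(losses) / len(losses)        # averaging over every sample
mean_kept = sum(kept) / len(kept)           # averaging after dropping the partial batch

assert mean_all != mean_kept                # the two "validation losses" disagree
```

The larger the batch size relative to the dataset, the bigger the discrepancy a dropped partial batch can cause.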

Tested with learn.validate(1)[0] vs. learn.get_preds(1, with_loss=True)[2].mean().

Or else you are using an unusual loss function whose mean doesn’t decompose across batches.

That must be it as I’ve never noticed this behavior before. This is happening with a custom loss function for a multi-modal model (has a binary classification task and a regression task) where the respective losses are combined.