Loss Calculation using Mini Batches

Hi All,

The validation loss is calculated as the average loss over all the mini-batches. Code for reference: https://github.com/fastai/fastai/blob/master/fastai/basic_train.py#L49

My questions are:

  1. Do we do the same for the training loss, i.e. average the losses over all mini-batches? This is needed for reporting after every epoch.
  2. Why don’t we calculate the loss on the entire validation set (predict on the whole validation data and compute the loss) at the end of every epoch?


  1. Yes. This is the standard convention among practitioners. A relevant discussion: https://stackoverflow.com/questions/54053868/how-do-i-get-a-loss-per-epoch-and-not-per-batch
  2. As long as your batches are all the same size (i.e. the last batch is not clipped to a smaller size), averaging per-batch losses yields the same result as computing the loss over the whole validation set. A simple calculation shows why. Note that this is NOT true for the training loss, since we update the network parameters after every mini-batch, so each batch's loss is measured against different weights.
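To make point 2 concrete, here is a small numpy sketch (the per-sample losses are made-up numbers, not fastai output). With equal-sized batches, the plain average of per-batch means equals the full-set mean; with a clipped last batch it does not, but weighting each batch mean by its batch size recovers the exact value:

```python
import numpy as np

# Hypothetical per-sample losses for a 10-sample validation set.
per_sample = np.arange(10, dtype=float)

# Loss computed over the whole validation set at once.
full_loss = per_sample.mean()  # 4.5

# Equal-sized batches (5 + 5): averaging the batch means matches the full loss.
equal_batches = [per_sample[:5], per_sample[5:]]
equal_avg = np.mean([b.mean() for b in equal_batches])  # 4.5

# Clipped last batch (8 + 2): the plain average over-weights the small batch.
clipped_batches = [per_sample[:8], per_sample[8:]]
clipped_avg = np.mean([b.mean() for b in clipped_batches])  # 6.0, not 4.5

# Weighting each batch mean by its size recovers the exact full-set loss.
weighted_avg = sum(b.mean() * len(b) for b in clipped_batches) / len(per_sample)

print(full_loss, equal_avg, clipped_avg, weighted_avg)
```

So the mismatch only appears when the last batch is smaller, and even then it is usually tiny in practice (one small batch out of many).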

Hope it helps.