Regarding the training error reported each epoch

I wanted to know: is the training error reported each epoch the result of running the network on all the training data with the final parameters obtained at the end of the epoch? Or is it just the average error rate over all mini-batches in that epoch (each computed with the parameters current at the time)?
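To make the distinction concrete, here is a toy sketch of the two conventions. No specific framework is assumed; the "model" is just a single scalar parameter `w` fit to targets by gradient steps on squared error, and all function names are illustrative.

```python
def batch_error(w, batch):
    # Mean squared error of predicting every target in the batch with w.
    return sum((w - y) ** 2 for y in batch) / len(batch)

def train_epoch_running_avg(w, batches, lr=0.1):
    """Convention B: average the mini-batch errors seen DURING the epoch,
    each computed with the parameters current at that moment."""
    errs = []
    for batch in batches:
        errs.append(batch_error(w, batch))          # error before the update
        grad = sum(2 * (w - y) for y in batch) / len(batch)
        w -= lr * grad                              # parameters change here
    return w, sum(errs) / len(errs)

def full_pass_error(w, batches):
    """Convention A: one extra pass over all training data using the
    end-of-epoch parameters."""
    errs = [batch_error(w, b) for b in batches]
    return sum(errs) / len(errs)

batches = [[1.0, 2.0], [3.0, 4.0]]
w, running_avg = train_epoch_running_avg(0.0, batches)
final_err = full_pass_error(w, batches)
# The two numbers generally differ: the running average mixes errors
# measured with stale (earlier-in-the-epoch) parameters.
```

In this toy run the running average comes out higher than the full-pass error, since the early batches were scored with less-trained parameters. Many libraries report the cheap running average (convention B) because convention A costs an extra full pass over the data.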