Are loss and accuracy on the validation set averaged per batch?

In the fit function:

import torch

def fit(epochs, model, loss_func, opt, train_dl, valid_dl):
    for epoch in range(epochs):
        # Put batchnorm / dropout layers into training mode
        model.train()
        print(model.training)
        for xb, yb in train_dl:
            loss = loss_func(model(xb), yb)
            loss.backward()
            opt.step()
            opt.zero_grad()

        # Put batchnorm / dropout layers into eval mode
        model.eval()
        print(model.training)
        with torch.no_grad():  # gradient calculations are off
            tot_loss, tot_acc = 0., 0.
            for xb, yb in valid_dl:
                pred = model(xb)
                tot_loss += loss_func(pred, yb)
                tot_acc  += accuracy(pred, yb)
        nv = len(valid_dl)  # number of validation batches
        print(epoch, tot_loss/nv, tot_acc/nv)
    return tot_loss/nv, tot_acc/nv

I can see that tot_loss/nv and tot_acc/nv are per-batch averages over the batches yielded by valid_dl.
Is this the correct behavior? If the last batch is smaller than the rest, a per-batch average is not quite the same as a per-sample average (see the sketch below). Also, why do we return only the last epoch's average validation loss and accuracy?
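
Here is a minimal sketch of the per-sample alternative I have in mind. It assumes, as in the code above, that loss_func and accuracy each return the mean over a batch:

    # Sketch: weight each batch by its size so the result is a true
    # per-sample average even when the last batch is smaller.
    with torch.no_grad():
        tot_loss, tot_acc, count = 0., 0., 0
        for xb, yb in valid_dl:
            pred = model(xb)
            n = len(xb)                          # samples in this batch
            tot_loss += loss_func(pred, yb) * n  # undo the mean, accumulate the sum
            tot_acc  += accuracy(pred, yb) * n
            count    += n
    print(epoch, tot_loss/count, tot_acc/count)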
Also, why are we not printing the training-set loss? Should we be computing that too, as a per-batch average over train_dl?
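
Something like this inside the training loop is what I mean (just a sketch, reusing the same per-batch averaging as the validation code):

    # Sketch: accumulate the per-batch training loss while training.
    tot_train_loss = 0.
    for xb, yb in train_dl:
        loss = loss_func(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
        tot_train_loss += loss.item()  # accumulate as a plain float
    print(epoch, 'train loss:', tot_train_loss/len(train_dl))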

Any thoughts?