Three questions, really, from lesson 9 of the v3 course, the part about calculating accuracy/loss for the validation set.
- In `nv = len(valid_dl)`, what does `nv` stand for?
- How is it that the mini-batch sizes are different? As far as I can see, only the very last mini-batch will have a different size. Is that what we are trying to account for?
- Why do we double the batch size in the DataLoader for validation as opposed to training? Just for speed?
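To make the second question concrete, here is a toy sketch of what I think is going on (my own illustration, not the course code): with a plain PyTorch `DataLoader`, only the final batch has a different size when the dataset length isn't divisible by the batch size, and a plain average of per-batch mean losses then over-weights that small last batch, while weighting each batch mean by its size recovers the true per-sample mean.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset: 10 samples, batch size 4 -> batches of sizes 4, 4, 2
ds = TensorDataset(torch.arange(10).float().unsqueeze(1))
dl = DataLoader(ds, batch_size=4)

sizes = [xb.shape[0] for (xb,) in dl]
print(sizes)  # only the last batch is smaller

# Pretend per-batch mean losses (made-up numbers, just for illustration)
losses = [1.0, 1.0, 4.0]

# Unweighted average of per-batch means: treats the 2-sample batch
# the same as the 4-sample batches
unweighted = sum(losses) / len(losses)

# Size-weighted average: equivalent to summing loss over all samples
# and dividing by the total sample count
weighted = sum(l * n for l, n in zip(losses, sizes)) / sum(sizes)

print(unweighted, weighted)
```

If the last batch is dropped or weighted by its size like this, the per-sample average comes out right; that's why I suspect the varying batch size matters for the validation statistics.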