In my dogbreed notebook, I’m seeing this:
[ 0. 0.27302 0.21015 0.93597]
[ 1. 0.25252 0.2081 0.9306 ]
[ 2. 0.24493 0.20816 0.93551]
[ 3. 0.22577 0.20581 0.93451]
[ 4. 0.22319 0.20776 0.93395]
Between epochs 0 and 1, both the training loss decreased (0.273 -> 0.253) and the validation loss decreased (0.210 -> 0.208), yet the overall accuracy decreased from 0.936 -> 0.931.
I would definitely expect it to increase if both losses are decreasing. Am I misunderstanding something about how the accuracy is calculated, or does this look like a rounding error?
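For what it's worth, here is a toy sketch of how I currently understand the two metrics could diverge (this is my own illustration with made-up probabilities, not the actual metric code from the notebook): cross-entropy loss is computed from the predicted probabilities themselves, while accuracy only checks whether the thresholded prediction is right, so the average loss can keep falling even while a couple of examples slip to the wrong side of the threshold.

```python
import numpy as np

def log_loss(probs, labels):
    # mean negative log-likelihood of the true class (binary case)
    p_true = np.where(labels == 1, probs, 1 - probs)
    return -np.mean(np.log(p_true))

def accuracy(probs, labels):
    # fraction of predictions on the correct side of the 0.5 threshold
    return np.mean((probs > 0.5).astype(int) == labels)

labels = np.array([1, 1, 1, 1])

# "Epoch 0": three mildly confident correct predictions, one wrong
epoch0 = np.array([0.60, 0.60, 0.60, 0.45])
# "Epoch 1": two predictions became much more confident, but two
# slipped just below the threshold
epoch1 = np.array([0.95, 0.95, 0.49, 0.49])

print(log_loss(epoch0, labels), accuracy(epoch0, labels))  # ~0.583, 0.75
print(log_loss(epoch1, labels), accuracy(epoch1, labels))  # ~0.382, 0.50
```

So in this contrived case the loss improves (0.583 -> 0.382) while the accuracy drops (0.75 -> 0.50). Is something like this what's happening in my run, just on a much smaller scale?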