One reason I can think of is that accuracy reduces each prediction to a binary right/wrong decision based on your classification threshold. Loss, on the other hand, reflects how confident the model is in its decisions.
Consider binary classification on two examples with ground truth labels (0, 1). One model predicts positive-class probabilities (0.4, 0.6); the other predicts (0.02, 0.98). At a threshold of 0.5, both models classify every example correctly, so they have identical accuracy, but the second model would have a much lower loss because its correct predictions are far more confident.
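A minimal sketch of this comparison, computing accuracy and mean binary cross-entropy by hand for the two hypothetical models (the function names are illustrative, not from any particular library):

```python
import math

def bce(y_true, y_prob):
    # mean binary cross-entropy: -[y*log(p) + (1-y)*log(1-p)]
    return sum(-(y * math.log(p) + (1 - y) * math.log(1 - p))
               for y, p in zip(y_true, y_prob)) / len(y_true)

def accuracy(y_true, y_prob, threshold=0.5):
    # fraction of examples where the thresholded prediction matches the label
    return sum(int(p >= threshold) == y
               for y, p in zip(y_true, y_prob)) / len(y_true)

y = [0, 1]
model_a = [0.4, 0.6]    # barely on the right side of the threshold
model_b = [0.02, 0.98]  # confidently correct

print(accuracy(y, model_a), accuracy(y, model_b))  # 1.0 1.0 -- identical accuracy
print(bce(y, model_a), bce(y, model_b))            # ~0.511 vs ~0.020 -- very different loss
```

The loss gap (roughly 0.51 vs 0.02) quantifies exactly the confidence difference that accuracy throws away.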