Does the accuracy metric reflect the training dataset?

Hi, I have noticed while using the tabular learner that accuracy keeps improving even as the training and validation losses diverge: training loss continues to improve while validation loss gets worse (overfitting). Does this mean the accuracy metric is computed on the training set? Isn't that strange?

No, all metrics are computed on the validation dataset.
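To make the distinction concrete, here is a minimal sketch in plain NumPy (an illustration, not the library's actual implementation) showing how a metric like accuracy is computed from validation predictions only, independently of whatever the training loss is doing:

```python
import numpy as np

def accuracy(preds, targets):
    # Fraction of predictions that match the targets.
    return float(np.mean(preds == targets))

# Hypothetical validation set: the metric is computed from these
# held-out labels, never from the training batches.
rng = np.random.default_rng(0)
val_targets = rng.integers(0, 2, size=100)
val_preds = val_targets.copy()
val_preds[:10] = 1 - val_preds[:10]  # flip 10 labels -> 90% correct

print(accuracy(val_preds, val_targets))  # 0.9
```

So an improving accuracy alongside a worsening validation loss is not a contradiction: loss is sensitive to how confident the (wrong) probabilities are, while accuracy only counts whether the argmax prediction is right.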