Accuracy Slowly Increases While Validation Loss Stays Almost Unchanged

I am training on a large set of images (560,000+ training examples) with resnet34. I first trained the last few FC layers with precomputed activations, then trained them with data augmentation, and finally unfroze all layers for training with discriminative learning rates. I used cosine annealing throughout (8 epochs per cycle).
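
To make the setup concrete, here is a rough sketch of the three stages in fastai-0.7-style code (the library version, paths, learning rates, and batch size are illustrative placeholders rather than my exact values):

```python
from fastai.imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.dataset import *

PATH = 'data/'          # placeholder dataset path
arch = resnet34
sz, bs = 224, 64        # placeholder image size / batch size

tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_paths(PATH, tfms=tfms, bs=bs)

# Stage 1: train only the new FC head on precomputed activations
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(1e-2, 1, cycle_len=8)      # cosine annealing, 8 epochs per cycle

# Stage 2: same head, now with data augmentation
learn.precompute = False
learn.fit(1e-2, 1, cycle_len=8)

# Stage 3: unfreeze everything and use discriminative learning rates
learn.unfreeze()
lrs = np.array([1e-4, 1e-3, 1e-2])   # smaller rates for earlier layer groups
learn.fit(lrs, 3, cycle_len=8)       # several 8-epoch cosine cycles
```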

When I trained the last few FC layers with precomputed activations, the trends of accuracy, validation loss, and training loss all looked good: each epoch gave a clear improvement in every metric.

When I trained the last few FC layers with data augmentation, the metrics also trended well, as shown below:

When I trained all layers with discriminative learning rates, there was no problem for the first few cycles of training: validation loss kept decreasing while accuracy kept increasing.

However, something odd happened in the subsequent cycles of training: the validation loss fluctuated around roughly the same level, yet accuracy still slowly increased. In particular, the validation loss rose at the end of each cycle (as shown below).

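One thing I keep in mind when reading these curves: accuracy only depends on the argmax of each prediction, while cross-entropy loss also depends on how confident the predictions are, so the two metrics can move somewhat independently. A small self-contained illustration (plain PyTorch, made-up logits):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 1, 2])   # three toy examples, three classes

# "Earlier" predictions: only the first example is classified correctly,
# and every prediction is fairly low-confidence.
earlier = torch.tensor([[1.0, 0.0, 0.0],
                        [0.8, 0.5, 0.0],
                        [0.6, 0.5, 0.2]])

# "Later" predictions: two examples are now correct, but the remaining
# mistake is made with very high confidence.
later = torch.tensor([[3.0, 0.0, 0.0],
                      [0.4, 0.7, 0.0],
                      [4.0, 0.5, 0.0]])

for name, logits in [("earlier", earlier), ("later", later)]:
    acc = (logits.argmax(dim=1) == labels).float().mean().item()
    loss = F.cross_entropy(logits, labels).item()
    print(f"{name}: accuracy={acc:.2f}, loss={loss:.3f}")

# Accuracy improves from 0.33 to 0.67, yet the mean loss goes UP,
# because cross-entropy heavily penalizes the single confident mistake.
```
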
I am not sure how to interpret this phenomenon (is it overfitting, or a sign of improvement?). Has anyone come across something similar? I am still training the model with discriminative learning rates. Should I continue the training?