Constant accuracy with decreasing loss

I am fairly new to Cross Validated, so I apologize if my question is poorly structured. I am currently working on Fully Convolutional Networks for Semantic Segmentation.

I am first trying to build the FCN-32 model from this paper. For this, I am using a pre-trained VGG16 model with the feature-extraction layers fixed, i.e. frozen. After freezing those layers, I replaced the classifier layers with upsampling layers (transposed convolutions) so as to obtain an output with the same spatial dimensions as the input.
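In code, my setup looks roughly like the sketch below (PyTorch; the class name `FCN32` and the exact padding choices are mine, not from the paper):

```python
import torch.nn as nn
from torchvision import models

class FCN32(nn.Module):
    """Minimal FCN-32 sketch: frozen VGG16 features + 1x1 scoring + 32x upsampling."""
    def __init__(self, num_classes):
        super().__init__()
        vgg = models.vgg16(pretrained=True)
        self.features = vgg.features              # conv layers, downsample by 32x
        for p in self.features.parameters():      # freeze the feature extractor
            p.requires_grad = False
        # Convolutionalized classifier head (replaces VGG's fully connected layers)
        self.head = nn.Sequential(
            nn.Conv2d(512, 4096, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Dropout2d(),
            nn.Conv2d(4096, 4096, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Dropout2d(),
            nn.Conv2d(4096, num_classes, kernel_size=1),  # per-class score map
        )
        # Single transposed conv recovers the full input resolution (stride 32)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=64, stride=32,
                                           padding=16, bias=False)

    def forward(self, x):
        return self.upsample(self.head(self.features(x)))
```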

The paper suggests training the network for 175 epochs or more with SGD, using momentum = 0.9, learning_rate = 1e-4, and weight_decay = 5^(-4).
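My optimizer setup is roughly the following (a sketch; I am assuming the paper's 5^(-4) weight decay means 5e-4, and 21 classes as in PASCAL VOC):

```python
import torch
from torch import nn

model = FCN32(num_classes=21)  # the sketch above; 21 = PASCAL VOC classes
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),  # skip frozen features
    lr=1e-4,
    momentum=0.9,
    weight_decay=5e-4,  # assuming the paper's 5^(-4) means 5e-4
)
criterion = nn.CrossEntropyLoss()  # per-pixel cross-entropy on the score maps
```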

After setting these exact parameters, my pixel-wise accuracy and mean intersection over union (IoU) remain constant at 69% and 0.138 (13.8%) respectively, while the loss decreases, albeit very slowly.
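For reference, here is a sketch of how I compute the two metrics (the helper name is mine; note both depend only on the argmax of the per-pixel scores, not on the raw probabilities):

```python
import torch

def pixel_acc_and_miou(logits, target, num_classes):
    """Sketch: logits (N, C, H, W), target (N, H, W) of integer class ids."""
    pred = logits.argmax(dim=1)                    # hard per-pixel labels
    acc = (pred == target).float().mean().item()   # pixel-wise accuracy
    ious = []
    for c in range(num_classes):
        inter = ((pred == c) & (target == c)).sum().item()
        union = ((pred == c) | (target == c)).sum().item()
        if union > 0:                              # skip classes absent from both
            ious.append(inter / union)
    miou = sum(ious) / len(ious) if ious else 0.0
    return acc, miou
```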

To my knowledge, accuracy should increase if the loss is decreasing, right? What might be the reason for this constant accuracy despite the decreasing loss? Am I doing something wrong here?