Increasing batch size to reduce training time

I am training the language model from Part 1, lesson 4. I expected that increasing the batch size would reduce the training time, but that's not what I'm seeing. On Google Colab, one epoch takes about 40 minutes with a batch size of 50, and still about 40 minutes with a batch size of 100.
At first I suspected that data loading was the bottleneck, since Colab provides shared CPUs, so I tried the same experiment on my local machine. But there too, increasing the batch size left the epoch time unchanged.
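For reference, this is roughly how I'm running and timing each configuration. It's a minimal sketch using the fastai v1 API from lesson 4; the sample dataset and hyperparameters here are illustrative, not my exact notebook:

```python
import time
from fastai.text import *  # fastai v1, as used in Part 1 lesson 4

path = untar_data(URLs.IMDB_SAMPLE)  # illustrative dataset, not my actual one

for bs in (50, 100):
    # Rebuild the data and learner from scratch for each batch size
    data = TextLMDataBunch.from_csv(path, 'texts.csv', bs=bs)
    learn = language_model_learner(data, AWD_LSTM, drop_mult=0.3)
    start = time.time()
    learn.fit_one_cycle(1, 1e-2)  # time a single epoch
    print(f'bs={bs}: {time.time() - start:.0f}s per epoch')
```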
Can anyone explain why this is happening?