Inconsistent predictions when batch-predicting from PyTorch models

I built a model with fast.ai for sentiment classification. When I predict one sentence at a time, it gives decent accuracy. To make better use of my GPU, I switched from single inputs to batches of 128 sentences at a time. Because I pad the shorter sentences, the output is no longer as expected.

I googled this and came across PyTorch's PackedSequence. I tried to use it, but couldn't get it working.
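For context, this is the usage pattern the PyTorch docs describe for `pack_padded_sequence` (a minimal standalone sketch with made-up dimensions and token ids, not my actual fast.ai model):

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence

# Hypothetical batch: 3 sequences of token ids, padded to length 5 with 0.
padded = torch.tensor([
    [4, 7, 2, 9, 3],   # length 5
    [5, 1, 8, 0, 0],   # length 3
    [6, 2, 0, 0, 0],   # length 2
])
lengths = torch.tensor([5, 3, 2])

embed = nn.Embedding(num_embeddings=10, embedding_dim=8, padding_idx=0)
lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

# Pack so the LSTM skips the padded positions entirely.
packed = pack_padded_sequence(embed(padded), lengths, batch_first=True,
                              enforce_sorted=False)
packed_out, (h_n, c_n) = lstm(packed)

# h_n[-1] holds the hidden state at each sequence's TRUE last token,
# not at the padded end, so it should match single-sentence inference.
print(h_n[-1].shape)  # (batch, hidden) = (3, 16)
```

Without packing, the LSTM keeps updating its hidden state over the pad tokens, which is presumably why my batched predictions diverge from the single-sentence ones.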

If anyone knows a solution or has run into this issue before, please shed some light on it.

Thanks