Padding changes output in text classification

I am using fast.ai for text classification.

If I run inference on a single message, it always predicts correctly. But if I run the same message through batch inference, the results are not always correct; I get different predictions. I was able to drill down into why it behaves like that: it is because of padding.

Example:

Your product is performing badly. Cannot you do that better? Get lost

Tokens list:

['xxbos', 'your', 'product', 'is', 'performing', 'badly', 'can', 'not', 'you', 'do', 'that', 'better', 'get', 'lost']

For the above text, if I give this as a single input to my PyTorch model, it predicts Negative (I have 3 classes).

If I give the same text together with one more example, the result changes.

"Your product is performing badly. Cannot you do that better? Get lost",
    "Your product is performing badly. Cannot you do that better? Get lost Your product is performing badly. Cannot you do that better? Get lost "

Tokens list:

[['xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxpad', 'xxbos', 'your', 'product', 'is', 'performing', 'badly', 'can', 'not', 'you', 'do', 'that', 'better', 'get', 'lost'], ['xxbos', 'your', 'product', 'is', 'performing', 'badly', 'can', 'not', 'you', 'do', 'that', 'better', 'get', 'lost', 'your', 'product', 'is', 'performing', 'badly', 'can', 'not', 'you', 'do', 'that', 'better', 'get', 'lost']]

Could anyone explain how to change this behavior? The only way I can think of is to predict one message at a time, but I don’t want to do that because I want to use the GPU’s throughput for batched inference.
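
For what it’s worth, here is a minimal illustration of the mechanism in plain PyTorch (my own toy model, not the fast.ai internals): an LSTM that steps over pad tokens ends up in a different hidden state than one that never sees them, while `pack_padded_sequence` skips the pads entirely.

    import torch
    import torch.nn as nn
    from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

    torch.manual_seed(0)

    PAD_IDX = 1
    emb = nn.Embedding(100, 8, padding_idx=PAD_IDX)   # pad embeds to zeros
    lstm = nn.LSTM(8, 16, batch_first=True)

    seq = torch.randint(2, 100, (1, 14))              # 14 "real" tokens
    pads = torch.full((1, 13), PAD_IDX, dtype=torch.long)

    out_plain, _ = lstm(emb(seq))

    # Left padding (what I see in my batches): the LSTM still steps through
    # the pads, so the hidden state at the last position differs.
    out_left, _ = lstm(emb(torch.cat([pads, seq], dim=1)))
    print(torch.allclose(out_plain[:, -1], out_left[:, -1]))   # False here

    # Right padding + packing: the pads are skipped, so the output at the
    # true last step matches the unpadded run exactly.
    packed = pack_padded_sequence(emb(torch.cat([seq, pads], dim=1)),
                                  lengths=torch.tensor([14]),
                                  batch_first=True, enforce_sorted=False)
    out_packed, _ = lstm(packed)
    unpacked, lens = pad_packed_sequence(out_packed, batch_first=True)
    print(torch.allclose(out_plain[:, -1], unpacked[:, lens[0] - 1]))  # True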

Help is appreciated.

You can’t batch texts together if they don’t have the same size, so you need padding.
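
One thing that limits how much padding any single text gets is grouping texts of similar length into the same batch (fastai’s samplers sort by length for the same reason). A rough sketch; `tokenize` here is just a placeholder for whatever tokenizer your pipeline uses:

    def batches_by_length(texts, tokenize, batch_size=32):
        # Sort by token count so items in a batch have similar lengths
        # and therefore need little padding.
        toks = sorted(texts, key=lambda t: len(tokenize(t)))
        for i in range(0, len(toks), batch_size):
            yield toks[i:i + batch_size]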

@sgugger Yes, I use batching. But the padding is changing the output in my case.

 "Your product is performing badly. Cannot you do that better? Get lost",
"Your product is performing badly. Cannot you do that better? Get lost Your product is performing badly. Cannot you do that better? Get lost "

For this example, I got Neutral and Negative as output.

But if I run each of these as a single prediction, I get Negative for both.
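
For reference, this is roughly how I compare the two paths (assuming fastai v1’s API, so adjust for your version; `learn` is the trained classifier from earlier):

    from fastai.basic_data import DatasetType

    texts = [
        "Your product is performing badly. Cannot you do that better? Get lost",
        "Your product is performing badly. Cannot you do that better? Get lost "
        "Your product is performing badly. Cannot you do that better? Get lost ",
    ]

    # One at a time -> Negative for both
    for t in texts:
        print(learn.predict(t)[0])

    # Batched through a test set -> Neutral / Negative because of the pads
    learn.data.add_test(texts)
    preds, _ = learn.get_preds(ds_type=DatasetType.Test, ordered=True)
    print(preds.argmax(dim=1))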

@sgugger Any clue on this?