Worse performance when training with variable-size input

Hi,

I see the point of training with variable-size input: the model learns patterns that do not depend on the input size and, as a consequence, generalizes better. However, when I compare the test-set performance of a model trained only on inputs of a fixed size S with that of a model trained on variable-size input, the former achieves, in my experiments, a much lower test error when evaluated at size S. While this makes sense intuitively, it is an undesirable result. The model trained on variable-size input may well achieve a lower overall error across the range of possible sizes, but ideally we would like it to be at least as good at any specific size as a model trained only on that size.
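For concreteness, here is a minimal sketch of the two training regimes I am comparing, assuming a PyTorch-style setup; the model, data, size range, and hyperparameters are illustrative placeholders, not my actual configuration:

```python
# Illustrative sketch of the two regimes; not my actual model or data.
import torch
import torch.nn as nn

# A small fully convolutional model, so it accepts any spatial size.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

FIXED_SIZE = 64         # the fixed size S used in the first regime
SIZE_RANGE = (32, 128)  # sizes sampled in the variable-size regime

def make_batch(size):
    """Placeholder data: random inputs and targets at the given size."""
    x = torch.randn(8, 3, size, size)
    y = torch.randn(8, 1, size, size)
    return x, y

for step in range(1000):
    # Fixed-size regime: size = FIXED_SIZE every step.
    # Variable-size regime: draw a new size each step, e.g.:
    size = int(torch.randint(SIZE_RANGE[0], SIZE_RANGE[1] + 1, (1,)))
    x, y = make_batch(size)
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    optimizer.step()
```

In both cases I evaluate on a held-out test set at size S; the fixed-size model wins there by a clear margin.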

I wonder whether this is a typical finding and how one deals with it. I would be interested in your opinions.

Thanks in advance!