Is there any point in making pooling layers that DON'T downsample?

I know one can pad in convolutions so that the size of the input never changes. I assume that is also possible for pooling. Is there any point in doing that?
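For concreteness, here is a minimal sketch (in plain numpy, with a hypothetical helper name) of what I mean: max pooling with stride 1 and "same" padding, so the output keeps the input's spatial size. Padding with `-inf` keeps the pad values from ever winning the max.

```python
import numpy as np

def max_pool_same(x, k=3):
    """Stride-1 max pooling with padding so output shape == input shape.
    (Hypothetical helper for illustration, not from any library.)"""
    pad = k // 2
    # Pad with -inf so padded entries never affect the max.
    xp = np.pad(x, pad, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    out = np.empty_like(x, dtype=float)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

x = np.arange(16.0).reshape(4, 4)
y = max_pool_same(x, k=3)
print(y.shape)  # same spatial size as x: (4, 4)
```

(In PyTorch this would correspond to something like `nn.MaxPool2d(kernel_size=3, stride=1, padding=1)`.)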

I’ve always learned that Conv followed by Pool is the way to get equivariance to translation (and some local invariance to translation). Is that the only reason to keep a pooling layer even when it isn't downsampling?