Conceptualizing Padding in Transposed Convolutions

I know it’s generally advised to use other upsampling techniques, but I’m not entirely sure how padding works when performing transposed convolutions…

Is the padding applied to the input or the output? And which preserves more information: more padding or less?
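To make the question concrete, here’s the output-size arithmetic I *think* applies — a sketch assuming PyTorch’s convention for `ConvTranspose2d`, where `padding` effectively crops the output rather than zero-padding the input (the helper `tconv_out_size` is my own, not a library function):

```python
def tconv_out_size(n, kernel_size, stride=1, padding=0, output_padding=0):
    """Output length of a 1-D transposed convolution (PyTorch's formula,
    as I understand it): (n - 1)*stride - 2*padding + kernel + output_padding."""
    return (n - 1) * stride - 2 * padding + kernel_size + output_padding

# A 4-pixel input, 3-wide kernel, stride 2 -- note that MORE padding
# gives a SMALLER output, which is the part that confuses me:
for p in (0, 1, 2):
    print(p, tconv_out_size(4, kernel_size=3, stride=2, padding=p))
# -> 0 9
#    1 7
#    2 5
```

If that formula is right, then padding=1 here is what turns a 4→9 upsample into the “clean” 4→7 one, but I don’t have a good intuition for what information (if any) gets lost at the cropped borders.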

Appreciate the help!