I have been digging everywhere and I could not find a way to reverse a transformation such as padding. So I wonder if there is any particular reason for this?
To illustrate why I think it would be useful, here is my use case. Basically, an image goes through 2 models: the first one outputs an area of interest (bbox) and the second one performs multiple object detection. So I need to crop the original image to the predicted area of interest before passing it to the second model.
The problem is that the predicted bbox is computed on top of a transformed image (resizing, padding, possibly rotation, etc.).
It is possible to work around the padding problem because it is not a random transformation: it is basically a matter of adjusting the coordinates according to the size ratio between the original and the transformed image. But in other cases such as rotation, since we don't store the parameters of the random transformation, it seems impossible to reconstruct the bbox coordinates in the reference frame of the original image.
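To make the deterministic case concrete, here is a minimal sketch of the coordinate adjustment I mean, assuming the resize and padding parameters are known. The function name and argument layout are mine, not part of any library API:

```python
# Hypothetical sketch: map a bbox predicted on a resized + padded image
# back to the coordinate frame of the original image.
# This only works because the resize/pad parameters are known; for random
# transforms whose parameters are not stored, no such inverse is possible.

def invert_resize_pad(bbox, orig_size, resized_size, pad):
    """bbox: (x1, y1, x2, y2) in the transformed image.
    orig_size: (W, H) of the original image.
    resized_size: (W, H) after resizing, before padding.
    pad: (left, top) padding added after resizing.
    Returns the bbox in original-image coordinates."""
    x1, y1, x2, y2 = bbox
    left, top = pad
    # Undo the padding (a simple translation)...
    x1, x2 = x1 - left, x2 - left
    y1, y2 = y1 - top, y2 - top
    # ...then undo the resize (a scaling by the size ratio).
    sx = orig_size[0] / resized_size[0]
    sy = orig_size[1] / resized_size[1]
    return (x1 * sx, y1 * sy, x2 * sx, y2 * sy)

# Example: a 1000x800 image resized to 500x400, then padded 12px on the
# left and top. A bbox of (112, 62, 212, 112) in the transformed image
# maps back to (200, 100, 400, 200) in the original image.
print(invert_resize_pad((112, 62, 212, 112), (1000, 800), (500, 400), (12, 12)))
```

For a rotation the same idea would need the sampled angle, which is exactly the parameter that is not kept around today.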
Again, my apologies if I missed anything. My understanding of what the lib should or should not do is limited, and I hope this will help me in that regard.