I want to apply a custom transform for computer vision.
My transform runs on the GPU on a whole batch, and I want it to be the LAST transform applied — even after the TensorImage has been converted into a plain float tensor (with shape, say, (64, 3, 128, 128) if images are 128x128 and the batch size is 64). Right now the TensorImage has shape (64, 128, 128, 3) and dtype byte (uint8), instead of the float CHW tensor I want, which is why I can't apply my tfm yet.
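In case it helps, here's the kind of conversion I mean in plain PyTorch (the variable names are just made up for illustration) — going from the byte NHWC batch I'm seeing to the float NCHW batch my tfm expects:

```python
import torch

# A byte NHWC batch like the one I'm seeing from TensorImage
batch = torch.randint(0, 256, (64, 128, 128, 3), dtype=torch.uint8)

# What I actually want my tfm to receive: float NCHW in [0, 1]
x = batch.permute(0, 3, 1, 2).float().div_(255)
print(x.shape)  # torch.Size([64, 3, 128, 128])
```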
Can I do this?
I can work around this by wrapping my model in another model (making the tfm part of the forward pass), but then I don't get a decode, so show_batch no longer displays things correctly.
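Concretely, the workaround I mean is something like this (a rough sketch; `TfmWrapper` is a name I made up, not a fastai class):

```python
import torch
from torch import nn

class TfmWrapper(nn.Module):
    "Workaround: bake the batch tfm into the model's forward pass."
    def __init__(self, model, tfm):
        super().__init__()
        self.model, self.tfm = model, tfm

    def forward(self, x):
        # Apply my custom tfm right before the real model sees the batch
        return self.model(self.tfm(x))

# Toy usage: identity model plus a doubling "tfm"
wrapped = TfmWrapper(nn.Identity(), lambda t: t * 2)
out = wrapped(torch.ones(2, 3))
```

This trains fine, but since the tfm lives inside the model rather than the data pipeline, there is no `decodes` for show_batch to call.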
EDIT: Also, my images seem to be getting normalized without my asking. That is, dls.one_batch() returns an image with values ranging from roughly -2.6 to 2.6, but I NEVER added a Normalize transform. Why is this happening, and more importantly, how can I stop it?
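For what it's worth, that range looks to me like ImageNet-stats normalization applied to pixels in [0, 1] — this is just my guess, but the extremes line up:

```python
import torch

# Standard ImageNet mean/std, shaped for NCHW broadcasting
mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)
std  = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)

# Where the pixel extremes 0.0 and 1.0 land after (x - mean) / std
lo = (0.0 - mean) / std
hi = (1.0 - mean) / std
print(lo.min().item(), hi.max().item())  # about -2.12 and 2.64
```

So a post-normalization range of roughly -2.6 to 2.6 is consistent with ImageNet normalization having been applied somewhere in the pipeline.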