How to reverse transforms?

I have the following code:

img_path = f'{PATH}{data.trn_ds.fnames[12]}'
trn_tfms,val_tfms = tfms_from_model(model,64)
test_img = open_image(img_path)
im = val_tfms(test_img)

I have tried looking in the fastai code, but I can’t figure out how to denorm the data (like the dataset, where we have the denorm function).

Can someone explain how to do it?

I am on lesson 3, so maybe it’s something that will be explained in later lessons; if that’s the case, let me know.

Your val_tfms has the same denorm method that you’re referring to, so val_tfms.denorm(im) will give you what you want.

It doesn’t work; I am getting the following error:

Like the denorm you were talking about, this one needs the channels in the right order before you apply it: your (3,256,256) array needs to be (256,256,3).
For this you can use transpose, with the argument (1,2,0) (because you want axis 1 in first position, then axis 2, and axis 0 at the end).
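A minimal sketch of that reordering with a dummy NumPy array (not your actual `im`, just an array of the same shape):

```python
import numpy as np

# Dummy array in PyTorch's (channels, height, width) layout
chw = np.zeros((3, 256, 256))

# transpose(1, 2, 0): axis 1 first, then axis 2, then axis 0
hwc = chw.transpose(1, 2, 0)
print(hwc.shape)  # (256, 256, 3)
```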


It works :fireworks:
Thank you!

But can you explain how you knew it expects a (256,256,3) array? :thinking:

Your error message :wink:
So it’s a broadcasting error (if you don’t know about broadcasting yet, you will have to learn, because it’s super useful). The basic idea is that to do an operation with an array of size 3, the array you pass should have 3 as its last dimension, so (3,256,256) can’t work, but (256,256,3) does.
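You can see the broadcasting rule directly with toy arrays; the per-channel stats of size 3 here are made-up values, not fastai’s real ones:

```python
import numpy as np

# Toy per-channel stats, one value per channel
stats = np.array([0.5, 0.5, 0.5])

# Channels last: trailing dimensions match (3 vs 3), so it broadcasts
hwc = np.ones((256, 256, 3))
print((hwc - stats).shape)  # (256, 256, 3)

# Channels first: trailing dimensions clash (256 vs 3), so it fails
chw = np.ones((3, 256, 256))
try:
    chw - stats
except ValueError as e:
    print("broadcast failed:", e)
```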

Then I remembered that the normalization is done before exchanging the channels: a regular image has its dimensions in the order (height,width,channels), but pytorch expects (channels,height,width) for its own optimization purposes. When you apply denorm it reverts the normalization, but it doesn’t change the channels back to what they were, so you have to do that on your own.
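Putting the two steps together, here is a rough sketch of what the whole reversal looks like. The mean/std values are the usual ImageNet-style stats used for illustration; the real ones live inside the transforms fastai built for your model, and `undo_transforms` is a hypothetical helper, not a fastai function:

```python
import numpy as np

# Illustrative ImageNet-style stats (assumption, not fastai's internals)
mean = np.array([0.485, 0.456, 0.406])
std  = np.array([0.229, 0.224, 0.225])

def undo_transforms(chw):
    """Take a normalized (channels, height, width) array back to a
    plottable (height, width, channels) image."""
    hwc = chw.transpose(1, 2, 0)  # channels last, so the stats broadcast
    return hwc * std + mean       # reverse of (x - mean) / std

img = undo_transforms(np.zeros((3, 256, 256)))
print(img.shape)  # (256, 256, 3)
```

After this, the array is in the (height,width,channels) order that plotting libraries expect.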
