I've never worked on an image-to-image task. But since your goal is to generate "dehazed" images from "hazy" images, it seems to me that you should be using an ImageBlock for both your inputs and your outputs. Also, to make debugging a bit easier, I would remove everything non-essential (splitter, tfms) and add it back once the simplified DataBlock is working.
Yes, you need a model architecture that can generate a full-sized image. A UNet is one option, but you could also have a look at the fastai implementation of GANs.
I think Stefan is right: you should use two ImageBlocks, since both your input and your output are images. Another thing to change is the get_y function. You don't want a label (which is what label_func probably returns), but the filename or path of the corresponding dehazed image.
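To illustrate, here is a minimal sketch of what such a get_y could look like. The directory layout (`hazy/` and `dehazed/` siblings with matching filenames) and the function name are assumptions — adjust them to however your dataset is actually organized:

```python
from pathlib import Path

# Assumed layout: data/hazy/scene_001.png pairs with data/dehazed/scene_001.png
def get_dehazed(hazy_path):
    """Map a hazy image path to its dehazed counterpart (same filename)."""
    hazy_path = Path(hazy_path)
    return hazy_path.parent.parent / "dehazed" / hazy_path.name

print(get_dehazed("data/hazy/scene_001.png").as_posix())
# → data/dehazed/scene_001.png
```

Since it only rewrites the path, you can sanity-check it on a couple of filenames before wiring it into the DataBlock.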
As for the model architecture, you can try a UNet. There is a convenient fastai function called unet_learner that dynamically builds a learner with a UNet on top of a pretrained model.
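Putting the two suggestions together, a minimal pipeline could look like the sketch below. It assumes fastai v2, the hypothetical `data/hazy` + `data/dehazed` layout, and a `get_dehazed` mapping function; since this is a regression on pixels rather than classification, I'd also pass a pixel-wise loss like MSELossFlat instead of the default:

```python
from fastai.vision.all import *

path = Path("data")  # assumed layout: data/hazy/*.png and data/dehazed/*.png

def get_dehazed(hazy_path):
    # hypothetical mapping: same filename in the sibling `dehazed/` folder
    return path/"dehazed"/hazy_path.name

dblock = DataBlock(
    blocks=(ImageBlock, ImageBlock),   # image in, image out
    get_items=get_image_files,
    get_y=get_dehazed,
    # no splitter/tfms yet: keep it minimal until the pairs look right
)
dls = dblock.dataloaders(path/"hazy", bs=8)
dls.show_batch()  # eyeball the hazy/dehazed pairs before training

# unet_learner builds a UNet decoder on a pretrained encoder;
# n_out=3 for an RGB output, MSELossFlat for per-pixel regression
learn = unet_learner(dls, resnet34, n_out=3, loss_func=MSELossFlat())
learn.fine_tune(5)
```

This won't run without your images on disk, of course, so treat it as a starting point rather than a finished recipe.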
Actually yeah! I removed the get_y label function, and it turns out that was causing the error. After removing it, I was able to print the images, but I don't have a clue why they look kind of repetitive.