I cannot quite figure out how to set up the images to create an ImageImageList. Where does it find the X image and the Y image, and how does it associate the right X with the right Y?
I’m following a tutorial about image-to-image translation and trying to rewrite the tutorial code using the fastai library.
My data comes as a single domino-like JPEG: the left half is the original image (a satellite photo) and the right half is the “Y” (the map image). I need to cut each image in two and use the left part as X and the right part as Y. In the tutorial they do it “manually”, but I’m keen to implement it using the fastai library.
Would someone be able to point me in the right direction? How do I instantiate that ImageImageList? Ideally also an example of where something similar is done?
So, I looked around a little for examples. The tests don’t do much, just a lambda identity function. Elsewhere, I see file paths supplied via
label_from_func. It’s a little old, but I’m looking at block 9 from here
Personally, I don’t see another way except for a pre-data-block setup: splitting your images in half and saving the targets under another file path. That way, you can use
label_from_func and derive the target file path from the train file path. I’m not even sure you need to pass anything extra to
label_from_func. It should be able to derive the label by virtue of it being a fully fledged path.
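As a sketch of that approach (folder names, the mapping function, and the chained calls are hypothetical, assuming fastai v1’s data block API as used in lesson7-superres):

```python
from pathlib import Path

# Hypothetical layout after the pre-split step:
#   data/maps/x/0001.jpg  <- left half (satellite image)
#   data/maps/y/0001.jpg  <- right half (map image)
path_x = Path('data/maps/x')
path_y = Path('data/maps/y')

def get_y_fn(x_path):
    """Derive the target file path from the input file path by filename."""
    return path_y / Path(x_path).name

# The fastai v1 data block would then look roughly like:
#   src = (ImageImageList.from_folder(path_x)
#          .split_by_rand_pct(0.1, seed=42)
#          .label_from_func(get_y_fn))
#   data = (src.transform(get_transforms(), size=256, tfm_y=True)
#           .databunch(bs=8)
#           .normalize(imagenet_stats, do_y=True))
```

Because `get_y_fn` returns a full path, fastai can open the label as an image just like the input, which is what makes ImageImageList work without anything more exotic.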
If this is your definition of “manually”, then I’m not sure what else you were looking for. Even if you were to, say, open the image inside
label_from_func using the file path, split it in half, and return the numpy array / tensor as the label for the image, I’m not sure that would work. It also sounds more complicated than it needs to be.
Hope that helps.
Hi! Yes, that helped!
I ended up making more sense of what you were saying by debugging the lesson7-superres notebook with VS Code and stepping into the fastai library. And yes, this is where the magic of the ImageImageList happens, in the “label_from_func” lambda …
Yes, I think the easiest option is to do some preprocessing of the images first and save them in two different folders.
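That preprocessing step can be sketched with Pillow (the paths, function names, and the left = X / right = Y convention are assumptions based on the dataset description, not code from the tutorial):

```python
from pathlib import Path
from PIL import Image

def split_pair(src_file, x_dir, y_dir):
    """Cut one combined 'domino' image in half: left -> x_dir, right -> y_dir."""
    img = Image.open(src_file)
    w, h = img.size
    img.crop((0, 0, w // 2, h)).save(Path(x_dir) / Path(src_file).name)  # X: satellite
    img.crop((w // 2, 0, w, h)).save(Path(y_dir) / Path(src_file).name)  # Y: map

def split_all(src_dir, x_dir, y_dir):
    """Split every combined JPEG in src_dir into the two output folders."""
    for d in (x_dir, y_dir):
        Path(d).mkdir(parents=True, exist_ok=True)
    for f in sorted(Path(src_dir).glob('*.jpg')):
        split_pair(f, x_dir, y_dir)
```

Since both halves keep the same filename, matching X to Y later is just a matter of swapping the folder in the path.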
Alternatively, I would need to write something against the fastai library, like a special ImageList that only reads the left side of each image, but that’s going to end up being more complicated.
“Manually” wasn’t really clear indeed - what I meant is that they deal with the input data without fastai, just plain numpy arrays and a bit of pytorch.
I’m just “translating” it into fastai as an exercise in understanding how everything works.
Thanks for taking the time to answer!