I have trained a U-Net on image chips of 250x250 pixels. Now I'm trying to run a large (satellite) image through the model, but I get a 250x250 result. I understand why I get this result, but before writing code to split my image, I was wondering if anyone had a simpler solution than working with numpy arrays?
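In case the numpy route ends up being the way to go, here's a minimal sketch of what the splitting could look like, assuming a Keras-style `model.predict` and an image whose sides are multiples of 250 (edge tiles would need padding otherwise; `predict_tiled` is a made-up name):

```python
import numpy as np

def predict_tiled(model, image, tile=250):
    """Split a large image into tile x tile chips, run each chip
    through the model, and stitch the predictions back together."""
    h, w = image.shape[:2]
    out = np.zeros((h, w), dtype=np.float32)
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            chip = image[y:y + tile, x:x + tile]
            # add a batch dimension, predict, drop it again
            pred = model.predict(chip[np.newaxis, ...])[0]
            out[y:y + tile, x:x + tile] = pred.squeeze()
    return out
```

The stitched output has visible seams at tile borders sometimes; overlapping tiles and averaging the overlap is the usual fix for that.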
I once faced a similar problem with a binary segmentation model that I wanted to use to crop the original images in the dataset. I had trained the model on images 4x smaller than the originals, so the predicted masks came out at the same 4x-smaller dimensions. I needed those masks to crop the original images, which are huge, not the smaller versions used to train the model.
What worked for me was to resize the predicted mask to the dimensions of the original image. This might work for you too.
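For an integer scale factor like 4x, the resize can be a plain numpy nearest-neighbour upsample, which keeps a binary mask binary (`upscale_mask` and `factor` are names I made up for the sketch); for non-integer factors, `cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)` does the same job:

```python
import numpy as np

def upscale_mask(mask, factor=4):
    """Nearest-neighbour upsample: repeat every pixel `factor`
    times along both axes, so 0/1 values stay exactly 0/1."""
    return np.repeat(np.repeat(mask, factor, axis=0), factor, axis=1)
```

Avoid bilinear interpolation here: it introduces fractional values along mask edges that you'd have to threshold again.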