Intuition about approach to a contest

Hello,

I was wondering if some more experienced people could comment on building a fast.ai version of this notebook?

The dataset has 57 satellite tiles. Each one of those tiles has 6 ‘images’ for different months. There is a mask that shows the field boundaries.

I have gone through and built a really simple fast.ai version that trains on each month/tile separately, and then I used the first month of each tile in the test set to generate predictions.

I would like to get to the point they reach in the notebook, where all 6 months of a tile are used as input to predict the mask (and to use augmentations).

There are 2 approaches I thought to try, and I have gotten stuck on both of them. I’m wondering if either one is viable.

The first was to create a DataBlock with 6 input TransformBlocks and 1 MaskBlock. Each input block takes in one month’s image for that tile (1 = March, 2 = April, etc.). A rough sketch of what I mean is below.
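Something like this is what I had in mind. It is an untested sketch, and `get_tile_ids`, `get_month_img`, and `get_mask` are placeholder helpers for however the files end up being laid out:

```python
from fastai.vision.all import *

# Placeholder helpers for this sketch:
#   get_tile_ids(path)        -> list of tile ids
#   get_month_img(tile, m)    -> path to month m's image for that tile
#   get_mask(tile)            -> path to that tile's mask
def month_getter(month):
    return lambda tile: get_month_img(tile, month)

dblock = DataBlock(
    # note: 4-band imagery would need a custom ImageBlock cls; the default is shown here
    blocks=(*[ImageBlock for _ in range(6)], MaskBlock(codes=['background', 'boundary'])),
    n_inp=6,                                   # first six blocks are inputs, the last is the target
    get_items=get_tile_ids,
    getters=[month_getter(m) for m in range(1, 7)] + [get_mask],
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
)
dls = dblock.dataloaders(path, bs=8)
```

As far as I understand, the stock unet_learner expects a single image input, so even with this I would still need a custom model (or to concatenate the six images into one many-channel tensor) on top of it.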

The other was to build a fastuple subclass and a Transform; the transform builds the fastuple with 6 TensorImages, similar to what is done in the video series tutorial. I’m not sure how to get this to work with a mask, though. A rough sketch of that idea is below as well.
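Roughly what I was picturing for the fastuple route (again an untested sketch, modelled on the Siamese-style tutorial, using the same placeholder helpers as above):

```python
from fastai.vision.all import *

class TileSequence(fastuple):
    "Six monthly TensorImages followed by the TensorMask for one tile."

class TileTransform(Transform):
    "Build a TileSequence from a tile id (placeholder get_month_img/get_mask helpers)."
    def encodes(self, tile):
        imgs = [TensorImage(image2tensor(PILImage.create(get_month_img(tile, m))))
                for m in range(1, 7)]
        mask = TensorMask(image2tensor(PILMask.create(get_mask(tile))).squeeze(0))
        return TileSequence(*imgs, mask)

# Then wrap it the same way the tutorial does, e.g.
# tls = TfmdLists(get_tile_ids(path), TileTransform(), splits=RandomSplitter()(get_tile_ids(path)))
# dls = tls.dataloaders(bs=8)
```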

I’m thinking that at test time I can then send in all the month images for a tile and it should be good to go. Then I can work on different ensemble methods. Or maybe just using ensembles is easier? For example, build 6 different learners, one for each month, and then combine them somehow to create the predicted mask. I’m also not sure how I would combine the predictions from different learners into a final mask; the only idea I have is sketched below.
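For the ensemble idea, the only combination scheme I can think of is averaging the per-pixel probabilities from each learner and taking the argmax. Something like this (untested, and it assumes six trained learners plus a matching test DataLoader per month for the same tiles):

```python
import torch

def ensemble_mask(learners, month_dls):
    "Average per-pixel class probabilities across the six per-month learners."
    probs = []
    for learn, dl in zip(learners, month_dls):
        p, _ = learn.get_preds(dl=dl)      # softmax probabilities: (n, classes, H, W)
        probs.append(p)
    avg = torch.stack(probs).mean(dim=0)   # average across the six months
    return avg.argmax(dim=1)               # predicted mask per tile: (n, H, W)
```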

My main ask is how to recreate that notebook. The secondary ask is what other approaches might be easier.

I really appreciate everyone’s thoughts.

Both approaches may work; which one is better, I have no idea.

For tiles to masks you can check:

There are also threads about satellite imaging and Kaggle notebooks.
For a sequence of images to a mask you can check:


Hi,

I have come up with this notebook. (I haven’t checked whether it runs on Kaggle, as I am using Paperspace.)

I am getting results around half of the baseline I based it on (baseline: 0.20, my high-water mark: 0.11). I was trying to convert the TensorFlow notebook here to fast.ai/PyTorch.

I’ve spent the last two weeks trying to get to the baseline results. I need some expert guidance on what I am doing that is significantly different. The only difference I can see is that I am doing dynamic augmentation instead of augmenting once before training, but in theory it should be the same.

Is the loss function wrong, maybe on the wrong axis? I don’t know anymore. Thanks for taking a look.
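For reference, this is the axis convention I think fastai expects for segmentation losses. It is just a generic shape check, not my actual code, so treat it as an assumption:

```python
from fastai.vision.all import *
import torch

# fastai's segmentation default is CrossEntropyLossFlat with axis=1, i.e. the
# class dimension of a (bs, classes, H, W) prediction against a (bs, H, W)
# integer mask.
loss_func = CrossEntropyLossFlat(axis=1)
preds = torch.randn(2, 2, 256, 256)         # batch of 2, 2 classes
targs = torch.randint(0, 2, (2, 256, 256))  # integer mask
print(loss_func(preds, targs))
```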

Admittedly I did not read your entire notebook, but are you feeding square-shaped images into the Unet, while your input seems to be a concatenation of images, i.e. rectangular? That could cause information loss. But then again, I might have just misinterpreted the parts of the code I looked at.

Hi Jurgen,

From reading the original notebook, they took (256, 256, 4)-shaped images and stacked them together to make a (256, 256, 24)-shaped input to the Unet: 6 images * 4 layers each = 24. I think that is what I am doing as well, except I am using (24, 256, 256)-shaped tensors, because that is the channels-first layout fast.ai builds (a quick sanity check of the stacking is below). I will review where the channel dimension needs to go, since I am not using the default unet_learner from fast.ai. Maybe that is causing some loss? It does train, and when I look at the results you can tell it is developing edges, just not to the degree of the baseline notebook.
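Just to sanity-check the stacking I described, here is what I mean in channels-first terms (a toy example, not my actual pipeline):

```python
import torch

# Six 4-band month images, channels-first, concatenated along the channel dim.
months = [torch.rand(4, 256, 256) for _ in range(6)]
x = torch.cat(months, dim=0)            # -> (24, 256, 256)
assert x.shape == (24, 256, 256)
# Same content as the TF notebook's (256, 256, 24), just a different layout.
```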

Thanks for the idea.
Cheers,
Steve