UNOSAT used fast.ai for their FloodAI model - Discussion on how to move forward

Hello all and @jeremy,

The United Nations Operational Satellite Applications Programme (UNOSAT) used fast.ai to train a U-Net to perform semantic segmentation on satellite imagery to detect water. We published a paper in the journal Remote Sensing: https://www.mdpi.com/2072-4292/12/16/2532. The fastai-based U-Net reached a Dice score of 0.92, accuracy of 0.97, precision of 0.91, and recall of 0.92.

You can find all the details in the paper, and a Jupyter notebook with all the steps for training our latest model is here: https://github.com/UNITAR-UNOSAT/UNOSAT-AI-Based-Rapid-Mapping-Service/blob/master/Fastai%20training.ipynb.

Here is a summary of the steps we followed:

  1. Dataset of 15 locations (58,128 tiles of size 256x256 in total)
  2. Tiling with a (256, 256) offset and no stride, excluding the black frame around each image, and under-sampling at the tile level by discarding all tiles that contained only background pixels
  3. Backbone: ResNet34
  4. ImageNet normalization
  5. Augmentation: get_transforms(flip_vert=True, max_warp=0.1, max_rotate=20, max_zoom=2, max_lighting=0.3)
  6. Weighted cross entropy, with class weights passed as torch.FloatTensor(weights).cuda()
  7. Training procedure available in the notebook (a minimal sketch of these steps follows below)
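
For reference, here is a minimal fastai v1 sketch of steps 2-6 (the paths, label function, class codes, and weights are hypothetical placeholders; the notebook linked above has the real values):

from fastai.vision import *
import torch

path = Path('tiles')                    # hypothetical data layout
codes = ['background', 'water']         # assumed label codes
weights = [0.25, 0.75]                  # hypothetical class weights

data = (SegmentationItemList.from_folder(path/'images')
        .split_by_rand_pct(0.2)
        .label_from_func(lambda o: path/'masks'/o.name, classes=codes)
        .transform(get_transforms(flip_vert=True, max_warp=0.1, max_rotate=20,
                                  max_zoom=2, max_lighting=0.3), tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats))     # step 4: ImageNet normalization

learn = unet_learner(data, models.resnet34)   # step 3: ResNet34 backbone
learn.loss_func = CrossEntropyFlat(axis=1,    # step 6: weighted cross entropy
                                   weight=torch.FloatTensor(weights).cuda())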

We are asking this community for support: do you have any suggestions on how to improve the model's performance and generalizability?

Thanks a lot for your help,

Best,

Edoardo

FYI: @JosephPB


First, congratulations! It’s awesome to see fastai used for such an important project and in this environment!

As for where to improve: new SOTA optimizers and activation functions have come out that you could certainly try. These include the Mish activation function and the Ranger optimizer (which is then paired with fit_flat_cos rather than fit_one_cycle). I made a function here that can replace all of a model's activation functions in one line of code, to simplify things if you want to try the Mish route.

Regarding Mish and Ranger: I notice you are using fastai v1. I have some example notebooks using them here; however, if you and your team wish to move to fastai v2, I have notebooks for that as well here. Otherwise, you could of course also increase the model depth to a ResNet50.
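
Here is a minimal fastai v2 sketch of that combination (dls is assumed to be your segmentation DataLoaders, and swap_act is a hypothetical stand-in for the activation-replacement function linked above):

from fastai.vision.all import *

def swap_act(model, act=Mish()):
    # hypothetical helper: recursively replace every ReLU with Mish
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU): setattr(model, name, act)
        else: swap_act(child, act)

learn = unet_learner(dls, resnet34, opt_func=ranger)  # Ranger = RAdam + Lookahead
swap_act(learn.model)
learn.fit_flat_cos(10, 3e-3)                          # flat LR, then cosine anneal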

I would not try to mess around with the xresnet series, as fastai's unet doesn't play well with them (until someone decides to make it work :slight_smile: ).

Finally, I'd perhaps also try presizing and progressive resizing, as these can boost model performance; they are discussed in these two chapters of fastbook: presizing, progressive resizing. A rough sketch of the idea follows below. Again, well done! :slight_smile:
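
A rough fastai v2 sketch of progressive resizing (the data layout, sizes, batch sizes, and epoch counts are all placeholders):

from fastai.vision.all import *

path, codes = Path('tiles'), ['background', 'water']   # hypothetical layout

def get_dls(size, bs):
    # rebuild the segmentation DataLoaders at a given tile size
    return SegmentationDataLoaders.from_label_func(
        path, get_image_files(path/'images'),
        lambda o: path/'masks'/o.name, codes=codes,
        item_tfms=Resize(size), bs=bs)

learn = unet_learner(get_dls(128, 16), resnet34)   # start small and cheap
learn.fine_tune(5)
learn.dls = get_dls(256, 8)                        # then train at full size
learn.fine_tune(5)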

PS: I would also recommend pinning the version of fastai used in that notebook, as currently pip install fastai will install the new version, which will break all your code!


There have been good results using dilated convolutions in U-Nets (a minimal sketch follows below). Another idea is to swap out the U-Net entirely for something like DETR (based on transformers).
I would also like to know what the current best practices for segmentation are.
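
For reference, a dilated convolution keeps the kernel size but spreads its taps apart, enlarging the receptive field without adding parameters; a minimal PyTorch sketch:

import torch
import torch.nn as nn

# dilation=2 makes a 3x3 kernel cover a 5x5 receptive field;
# padding=dilation keeps the spatial size unchanged
conv = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
x = torch.randn(1, 64, 256, 256)
print(conv(x).shape)  # torch.Size([1, 64, 256, 256])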


Can you say more about this? What have you seen? Do you have an example you can share?


Thanks so much for sharing @enemni, and congrats on your great results! :slight_smile:

BTW, the fastai paper is also published in an MDPI journal, so I would probably be grateful if you updated your kind fastai citation to point at that paper:

(I know it covers v2, and you used v1, but it’s probably still better than citing the gh repo, since that’s v2 now as well!)

It's not so much something I've seen; it's just that getting custom backbones into fastai's unet isn't easy to do. I tried getting xresnet to work and had some headaches. (However, I just tested this now and it does work out of the box! Sorry! Not sure if something changed along the way since I tried it, or if it was EfficientNet I was thinking of :slight_smile: )

# dls, acc_camvid, config, and opt are defined earlier
# (DataLoaders, accuracy metric, unet config, and optimizer)
learn = unet_learner(dls, xresnet34, metrics=acc_camvid, config=config,
                     opt_func=opt)

That being said, do note that only the xresnet50 is pretrained (and I haven't had luck recreating the Pets results with its pretrained weights to match resnet's performance).


Thank you so much for sharing this, and congratulations! I'm not sure if you have been able to experiment with albumentations; I had the opportunity to use it on another semantic segmentation task, and it may help improve your results.

The full list of transforms can be found on their website. As you are using satellite images, maybe try random rain, random snow, or random fog instead of the regular augmentations: https://github.com/UjjwalSaxena/Automold--Road-Augmentation-Library. Another interesting transform is MaskDropout (which was used as part of the 1st place solution in this Kaggle competition: https://www.kaggle.com/c/severstal-steel-defect-detection/discussion/114254).

For example, you could compose multiple albumentations transforms into a single pipeline:
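
A minimal sketch (the specific transforms and probabilities are illustrative; image and mask are assumed to be NumPy arrays):

import albumentations as A

aug = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.RandomFog(p=0.2),
    A.RandomRain(p=0.2),
    A.MaskDropout(max_objects=2, p=0.2),
])

out = aug(image=image, mask=mask)   # returns a dict of augmented arrays
aug_image, aug_mask = out['image'], out['mask']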

There are also other models you can experiment with; for example, torchvision.models.segmentation has a range of models which can easily be used with fastai.
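
As one hedged example, a torchvision segmentation model can be wrapped so that its dictionary output works with a fastai Learner (dls is assumed to be your segmentation DataLoaders; the model and loss choices are illustrative):

from fastai.vision.all import *
from torchvision.models.segmentation import deeplabv3_resnet50

class DeepLabV3(Module):
    # torchvision segmentation models return {'out': ..., 'aux': ...};
    # fastai expects the model to return a plain tensor
    def __init__(self, n_classes=2): self.m = deeplabv3_resnet50(num_classes=n_classes)
    def forward(self, x): return self.m(x)['out']

learn = Learner(dls, DeepLabV3(), loss_func=CrossEntropyLossFlat(axis=1))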

I have a notebook that uses albumentations in fastai; however, it is based on the latest version rather than v1. Still, I do not think it would be difficult to adapt if you wanted to experiment.


fastai v2 includes cutout, which is rather similar. There’s also a tutorial showing how to use albumentations, but note that fastai’s augmentations are much faster, since they run on the GPU.
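
A minimal sketch of the v2 approach, using RandomErasing, fastai's cutout-style transform (the probabilities and counts are illustrative):

from fastai.vision.all import *

# cutout-style augmentation applied per batch on the GPU;
# pass batch_tfms=batch_tfms when building your DataLoaders
batch_tfms = [*aug_transforms(flip_vert=True), RandomErasing(p=0.5, max_count=2)]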


Hello @jeremy, I notified the journal. I'll keep you posted.


This is simply great work! Thanks for sharing this here :slight_smile:

Is there a chance that you'll share the final trained fast.ai U-Net model or the training data as well? I have a similar project starting next week, and your model would provide a great baseline!!

Cheers,
Harald