Geospatial Deep Learning resources & study group

Thank you very much, Dave, for the interesting workflow, including the notebooks.

I have already predicted land cover segmentations (with a fastai inference learner) and would now like to convert these into georeferenced polygons. In your workflow you use:
“mask2poly = sol.vector.mask.mask_to_poly_geojson(…)” (Solaris) with three classes, but the conversion with only two classes (background / segmentation) does not work for me.

Are there perhaps other techniques for converting arrays into georeferenced polygons? Thank you in advance.

I had two classes (building and background) as a geotiff mask created after stitching the predicted tiles, and this solaris function worked well for me. The only problem I faced was that it does not write the result to disk even if you define output_path with a filename.

You can write the GeoJSON file with geopandas' to_file method.
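A minimal sketch, assuming a 2D prediction array pred_mask and placeholder filenames:

import solaris as sol

# reference_im supplies the CRS + affine transform so the polygons
# come out georeferenced; output_path did not write to disk for me
gdf = sol.vector.mask.mask_to_poly_geojson(
    pred_mask,
    reference_im='stitched_tiles.tif',
    do_transform=True,
    output_path='buildings.geojson'
)

# so write the returned GeoDataFrame explicitly:
gdf.to_file('buildings.geojson', driver='GeoJSON')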
Hope this helps.

1 Like

Thank you very much for your advice. The command

x.to_file("x.json", driver="GeoJSON")

helped me to generate a readable file. With the following lines, however, the polygons still cannot be converted correctly:

inference_learner = load_learner(x, x)
inference_learner.model.float()

and

Maybe someone can help me? Thank you in advance.

A bit hard to tell from your screenshot, but it looks like mask_to_poly_geojson did create a polygon out of your prediction output (im[:,:,0]), plotted as the blue square with the white holes in the middle & top-left.

My guess is that you need to change the value of the bg_threshold argument of the solaris function so that the mask correctly captures whichever array values you’re trying to polygonize as your positive class. Here’s the doc for that function and its arguments.
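For instance (a hypothetical sketch using your im array; the threshold value is just a guess to illustrate the idea):

import numpy as np

# check what values the channel actually contains before picking a threshold
print(np.unique(im[:, :, 0]))  # e.g. [0, 255] for a binary mask

mask2poly = sol.vector.mask.mask_to_poly_geojson(
    im[:, :, 0],
    bg_threshold=128  # pixels above this value are treated as the positive class
)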

Note also that because your polygonized output isn’t georeferenced yet (the mask2poly1.plot axes are in pixel values), the plotted polygon is vertically flipped relative to how your segmentation result (prediction[0].show()) is displayed.
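A quick hypothetical check: since a GeoDataFrame plot returns a matplotlib Axes, you can invert its y-axis to compare the two plots visually:

ax = mask2poly1.plot()
ax.invert_yaxis()  # flip vertically to match the prediction's display orientation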

1 Like

Many thanks, Dave, for your suggestions. The transformation to georeferenced polygons now works. Attached is an extract of the inference learner, including the use of the solaris function (solaris.vector.mask.mask_to_poly_geojson):
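In outline, the pipeline looks roughly like this (placeholder paths; fastai v1 API; a rough sketch rather than the exact attached code):

from fastai.vision import load_learner, open_image
import solaris as sol

# load the exported inference learner and run it in full precision
inference_learner = load_learner('models/', 'export.pkl')
inference_learner.model.float()

# predict on a tile; the second element is the class-index mask (1 x H x W)
pred = inference_learner.predict(open_image('tile.tif'))
mask = pred[1].numpy().squeeze().astype('uint8')

# polygonize and georeference against the source tile
mask2poly = sol.vector.mask.mask_to_poly_geojson(
    mask,
    reference_im='tile.tif',
    do_transform=True
)
mask2poly.to_file('polygons.geojson', driver='GeoJSON')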

3 Likes

Great notebook. I have one question about the fastai API that you might be able to answer. In your case, I think you use multichannel images with a single binary class.
For my project, I currently have single-channel images but several segmentation classes (like in this camvid tutorial). Do you know an easy way to one-hot encode the multiclass label (BxCxHxW instead of the current Bx1xHxW)? I hope it’s not too off-topic. Thank you!
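For example, in plain PyTorch the encoding itself would be something like this (shapes and class count are illustrative):

import torch
import torch.nn.functional as F

labels = torch.randint(0, 5, (4, 1, 64, 64))          # B x 1 x H x W, 5 classes
onehot = F.one_hot(labels.squeeze(1), num_classes=5)  # -> B x H x W x C
onehot = onehot.permute(0, 3, 1, 2).float()           # -> B x C x H x W
print(onehot.shape)  # torch.Size([4, 5, 64, 64])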

1 Like

The DOTA dataset might prove useful. One of the annotated classes is storage tank. A recent update also provides pixel-level annotations:

Paper:

Datasets:

4 Likes

Been looking at the xview2 competition, which is about identifying buildings in satellite images and classifying the damage due to natural disasters. Thanks @daveluo for your building segmentation notebook, really helpful here.

In the competition’s baseline solution, they use one model to identify the buildings. Then they chop the image up into small images, each just about encompassing an identified building. Then, these small images are passed to another model which classifies the damage level.

Is there a good reason that one cannot, or shouldn’t, do all this in one step, with one model, by treating the damage levels as just additional categories to segment an image into? This way there will be no need to chop the original image up into smaller bits for damage classification.

1 Like

Is there a good reason that one cannot, or shouldn’t, do all this in one step, with one model, by treating the damage levels as just additional categories to segment an image into?

There isn’t one, but discussions in the forum suggest that the best results are obtained from a two-step process. Also, the classification score is calculated on the pixels marked as buildings in the pre-disaster dataset, so if you segment on the post-disaster data you are more likely to miss relevant pixels.

2 Likes

Is the default loss function for fastai’s unet learner just the cross-entropy loss over all pixels? Do you know how this compares with the loss function you use in your notebook? Are many buildings missed, for example? Thanks!

This is an interesting paper on the application of ML to predicting volcanic eruptions from satellite images.

3 Likes

Hi @immarried, thanks for checking out the notebook and sorry for my slow reply. Yes, the default loss function for fastai’s unet learner is a pixel-wise cross entropy loss, specifically FlattenedLoss of CrossEntropyLoss(). You can confirm by checking the printout of learn.loss_func.

I’ve found that a combination of cross entropy (or focal loss, which is a modification of CE loss) + dice loss has always done better for my segmentation tasks, whether with 1 binary target channel or 3, than cross entropy alone. This seems to be a pretty consistent experience based on what the top 5 finalists of the last SpaceNet building challenge did: https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions
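As a rough illustration (a minimal sketch with illustrative weights, not the exact implementation from my notebook), a combined CE + dice loss for a binary target might look like:

import torch.nn as nn
import torch.nn.functional as F

class CEDiceLoss(nn.Module):
    # cross entropy plus (1 - dice) on the foreground channel
    def __init__(self, dice_weight=1.0, smooth=1.0):
        super().__init__()
        self.dice_weight, self.smooth = dice_weight, smooth

    def forward(self, logits, target):
        # logits: B x C x H x W; target: B x H x W integer class indices
        ce = F.cross_entropy(logits, target)
        probs = F.softmax(logits, dim=1)[:, 1]   # foreground probabilities
        fg = (target == 1).float()
        inter = (probs * fg).sum()
        dice = (2 * inter + self.smooth) / (probs.sum() + fg.sum() + self.smooth)
        return ce + self.dice_weight * (1 - dice)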

Also re: your suggestion for the xView2 challenge to do a multiclass segmentation of damage levels, I agree with @wwymak’s point that the pixel misalignment between the pre- and post-images may throw off your segmentation performance significantly. Worth a try though!

3 Likes

Did anyone participate in the Open AI Caribbean Challenge hosted by DrivenData?
I am looking to discuss better solutions and expand my knowledge if anyone else participated.
I finished in 11th place and am really happy about it, as this was my first ever competition!
All thanks to @daveluo and his notebooks, which helped me throughout.

6 Likes

I came 21st. I wasn’t able to do any work on it for the past three weeks :frowning: (I actually had time again this morning, but the comp had closed), so I’m fairly happy with the result.
Fastai codebase; resnet, densenet, efficientnets.

For me too, thanks to @daveluo for the intro to solaris and rasterio, which was critical to data prep.

3 Likes

Congrats @chaitanyaarora and @adrian on your excellent results!

Glad to hear my work could be of help. I didn’t participate in the Caribbean challenge but am interested to learn more about your approaches, especially re: your geodata preprocessing and if/how you’ve adapted fastai from stock usage.

For a new geospatial ML challenge to dive into, check out:

Just launched and running until 3/16/2020. Also by DrivenData, along with Azavea and GFDRR Labs.

$15K in prizes across 2 tracks:

  1. Semantic Segmentation track: $12K for top 3 open-source building segmentation models developed on an extensive and diverse new dataset of drone imagery and OpenStreetMap building footprints from 10+ African cities/regions:

  2. Responsible AI track: $3K for the top 3 ideas/projects examining the applied ethics of developing and using AI systems for disaster risk management. Very open submission format - you can submit anything from Jupyter notebooks to speculative fiction - and you’re not required to also participate in the segmentation track. Segmentation participants, however, do have to submit at least once to this track in order to qualify for the cash prizes in their track.

I’m one of the challenge organizers so let me know on the participants’ forum if you have any specific questions!

4 Likes

Thanks for the heads-up on the challenge; I may be able to find some time to have a go.

My approach for the Caribbean challenge was:

  1. Rasterio to crop the tif for each roof polygon to its own image; use cv2 border-replicate or reflect padding to make it square (a sketch below).
  2. Try different transforms in fastai; basic crop, rotate, zoom, and brightness worked best.
  3. Predict roof type using resnet50 (using all data), densenet121, and efficientnet, then average the predictions (means ensembling). This base approach got me my best result.
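A sketch of the crop-and-pad step (scene.tif and roof_polygon are placeholders for the actual tif and a GeoJSON geometry):

import cv2
import numpy as np
import rasterio
from rasterio.mask import mask

# crop the raster to the roof footprint's bounding window
with rasterio.open('scene.tif') as src:
    out, _ = mask(src, [roof_polygon], crop=True)
img = np.moveaxis(out[:3], 0, -1)  # C x H x W -> H x W x C (keep RGB)

# reflect-pad the shorter side to make the crop square
h, w = img.shape[:2]
side = max(h, w)
pv, ph = side - h, side - w
img = cv2.copyMakeBorder(img, pv // 2, pv - pv // 2,
                         ph // 2, ph - ph // 2, cv2.BORDER_REFLECT)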

Other approaches:

  1. Predicting roof type + region (concatenated class names) / roof type + country.
  2. Predicting the non-verified data (2 regions), then keeping the results where a bunch of models all predicted the same class with, say, an 80% cutoff, and using those predictions for training alongside the verified data.
  3. RICAP / mixup / cutmix - didn’t work well.
  4. Evening up the class imbalance - didn’t help.
  5. Auto-detecting blurry images and sharpening them - didn’t help.
  6. Imgaug transforms.
  7. Predicting the test data, then adding confident predictions to the training data.

There was a spatial component to roof types per tiff that I thought could help, but x/y was too difficult for the neural net to predict, and I didn’t find a good way to add the spatial distribution to the models.

2 Likes

For the Caribbean challenge my approach which worked best was:

  1. Cropping data using Rasterio as in this notebook
  2. Training and testing 3 different models using resnet50, one for each country individually
    a. For Colombia, used the normal approach to train on the data
    b. For Guatemala, as there were no images in the incomplete class, I assigned it 0 at test time and classified my images into only 4 classes
    c. For St. Lucia, I overfitted my training data, using all the images provided for the country

Things that did not work for me:

  1. Normalizing the number of images in each class to be identical by rotating/changing colors so that my model is not biased towards a particular class.
  2. Setting up a single model for whole of my training data.
  3. Cropping images using the mask method in rasterio with crop=True
  4. Overfitting my model for the other two countries
  5. Manually classifying the most confused images

4 Likes

I just posted this (Share your work here ✅) but I figured I should also share it here :slight_smile:

Article: https://authors.elsevier.com/a/1aN0a3I9x1YsQn
Code and data: https://github.com/mnpinto/banet

Feel free to ask me more about this work! :slight_smile:

Edit: Updated the sharing link, which allows free access until 27 February.

5 Likes

This might be of interest; it claims some state-of-the-art results:
HMANet: Hybrid Multiple Attention Network for Semantic Segmentation in Aerial Images

1 Like

This is awesome. Thanks for sharing and congrats on the research results and publication! Really interesting approach to unet segmentation with separate spatial and temporal convs + LSTM module…

Would be curious to see how well this does for land cover mapping and change detection more generally (i.e. on sentinel 2 or landsat images over multiple seasons/years). Have you given anything like that a go?

Not quite the same but ICYMI, you may find this new IEEE GRSS Data Fusion competition interesting…land cover classification with low-res labels: http://www.grss-ieee.org/community/technical-committees/data-fusion/

2 Likes