Geospatial Deep Learning resources & study group

For anyone who’s looking for an interesting challenge: our non-profit research group openclimatefix.org is working to map solar PV globally using ML: http://jack-kelly.com/blog/2019-07-09-solar-pv-mapping

There are several groups mapping different areas, but lots of help is still needed to map small rooftop solar panels. For example, in the UK we have found access to 25 cm aerial imagery, and in the USA there are a few groups doing this. A global utility-scale dataset is also being released soon.

But we don’t have any other countries in progress! If you’d like to have a go at solar PV detection (mainly small rooftop panels) using ML for a particular country, we invite you to do so.

The biggest hurdle is usually imagery. We’ve found Mapbox has high-resolution global coverage, but for many countries the quality is too low. So it’s worth exploring domestic satellite and aerial imagery providers; some may be open to giving access to data for educational or research purposes.
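For instance, here is a minimal sketch of pulling a Mapbox satellite tile for a given lat/lon using the Raster Tiles API and standard slippy-map tile math. The token is a placeholder, the URL pattern is worth double-checking against Mapbox’s current docs, and their terms of service still apply:

import math
import requests

MAPBOX_TOKEN = "YOUR_MAPBOX_ACCESS_TOKEN"  # placeholder

def latlon_to_tile(lat, lon, zoom):
    # standard Web Mercator (slippy map) lat/lon -> tile x/y
    lat_rad = math.radians(lat)
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

def fetch_satellite_tile(lat, lon, zoom=18, out_path="tile.jpg"):
    x, y = latlon_to_tile(lat, lon, zoom)
    url = (f"https://api.mapbox.com/v4/mapbox.satellite/"
           f"{zoom}/{x}/{y}@2x.jpg90?access_token={MAPBOX_TOKEN}")
    resp = requests.get(url)
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)

# fetch_satellite_tile(51.5074, -0.1278)  # e.g. central London at zoom 18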

Please pay particular attention to imagery licensing. We want to be able to aggregate and freely distribute PV panel maps for research and for climate-change impact, which includes pushing results to OpenStreetMap. For example, results derived from Google Maps imagery cannot be openly distributed, but Mapbox explicitly states you can use their imagery to improve OpenStreetMap.

There is some great code to base your work on, such as:
http://web.stanford.edu/group/deepsolar/home
Using DeepSolar with Mapbox: http://openclimatefix.discourse.group/t/solarpaneldatawrangler/26
SolarMapper https://arxiv.org/pdf/1902.10895.pdf

If you’d like to chat, just drop me a DM: twitter.com/dctanner

6 Likes

Thanks @dctanner for posting this opportunity! I’ve added your links to a new Opportunities section in the top wiki post, along with other new resources and links. Please feel free to add/edit the wiki directly with more information.

Another great place to find and make use of open-source overhead imagery (especially drone/aerial imagery, which works well for rooftop solar) is OpenAerialMap.

All imagery hosted there should be CC-BY-4.0: http://openaerialmap.org/about/

Random question for those who are experienced with geospatial data. I’m interested in making a model to detect topes (pretty giant speed bumps all over Mexico) from satellite images. Before I dig deeper, is this possible from satellite or are the objects going to be too small to be detectable? (They are often colored but the most interesting ones would be uncolored). Sorry I haven’t done my homework on this one but I would rather hear if it’s possible before going down another rabbit hole. Thanks.

SPOT 6 & 7 (https://eos.com/spot-6-and-7) have 1.5 m resolution, but I guess you would need to check a few examples to see if it is really feasible: a bump is typically only around a metre across in the direction of travel, so at that resolution it would span barely a pixel, and I guess the uncoloured ones especially may be a problem to see from above.

It’s the kind of task that would be easier using data collected from cars, like Teslas, which have cameras and sensors recording all the time. Or, the other way around, an app like Google Maps or a similar one with a high number of users could let users report where there is a bump.

Thanks, yeah, allowing users to input information is our ultimate solution, especially because topes come and go. But we had the idea to bootstrap with satellite data, capturing maybe 20-40% of them, to make the app somewhat useful to begin with; otherwise it’s hard to get users to contribute.

Hi everyone,

I’m in the process of creating a fully self-contained Colab notebook to demonstrate an end-to-end workflow for building segmentation from overhead imagery. This means covering not only the DL model training and inference part but also all of the less-covered steps and interstitial stuff to go from data creation to model creation to inference on new imagery to evaluation against ground truth.

This is very much a work in progress and still incomplete, but I’m happy to share this earliest version for your feedback. v1 of the blog & notebook was published on 7/25! It is set up to be fully executable in Google Colab, either from beginning to end or within each workflow section/step independently. Package dependency installations are all working as of 7/25, and I tried to include download links to the generated artifacts at each step so you can pick up and start from any point:

Updated 7/25, published Medium post:

Updated 7/25, published Colab notebook v1 link:

https://nbviewer.jupyter.org/github/daveluo/zanzibar-aerial-mapping/blob/master/geo_fastai_tutorial01_public_v1.ipynb

Any feedback about the post or notebook is much appreciated! DMing me here or commenting on Medium/github both work. Much thanks!

Some highlights:

  • Added 7/22: Wrote much more commentary explaining each major step with references to learn more.

  • Added 7/22: demo’ing SpatioTemporal Asset Catalogs and browser for visualizing geospatial datasets for ML:

  • Updated 7/18: creating complex-shaped training and validation AOIs with geojson.io:

  • Updated 7/22: checking & removing overlapping tiles between train and validation:

  • Added 7/18: cleaning up invalid and overlapping geometries

  • tiling geoTIFFs and geojson files to create image and mask pairs in 3 channels with supermercado, rio-tiler, geopandas and solaris:

  • customizing fastai’s default segmentation approach to handle 3-channel target masks:

  • Updated 7/18: custom loss functions and metrics combining BCE, Focal Loss, Dice Loss or any other loss function, with adjustable weighting by loss function and by channel (a usage sketch follows the code below):

# Requires torch; FocalLoss and DiceLoss are defined earlier in the notebook.
import torch.nn as nn

class MultiChComboLoss(nn.Module):
    """Weighted combination of loss functions, applied per target channel."""
    def __init__(self, reduction='mean', loss_funcs=[FocalLoss(), DiceLoss()],
                 loss_wts=[1, 1], ch_wts=[1, 1, 1]):
        super().__init__()
        self.reduction = reduction
        self.ch_wts = ch_wts        # relative weight of each target channel
        self.loss_wts = loss_wts    # relative weight of each loss function
        self.loss_funcs = loss_funcs

    def forward(self, output, target):
        # reduction needs to be reset on each forward pass so that
        # learn.get_preds(with_loss=True) can compute per-item losses
        for loss_func in self.loss_funcs:
            loss_func.reduction = self.reduction
        loss = 0
        channels = output.shape[1]
        assert len(self.ch_wts) == channels
        assert len(self.loss_wts) == len(self.loss_funcs)
        # weighted sum of each loss function over each channel,
        # normalized by the total channel weight
        for ch_wt, c in zip(self.ch_wts, range(channels)):
            ch_loss = 0
            for loss_wt, loss_func in zip(self.loss_wts, self.loss_funcs):
                ch_loss += loss_wt * loss_func(output[:, c, None], target[:, c, None])
            loss += ch_wt * ch_loss
        return loss / sum(self.ch_wts)
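For reference, here is a rough sketch of how a loss like this might be passed to a fastai v1 unet_learner; the `data` DataBunch name is a placeholder and not from the notebook itself:

from fastai.vision import unet_learner, models

# `data` is assumed to be a segmentation DataBunch with 3-channel target masks
learn = unet_learner(data, models.resnet34,
                     loss_func=MultiChComboLoss(loss_funcs=[FocalLoss(), DiceLoss()],
                                                loss_wts=[1, 1],
                                                ch_wts=[1, 1, 1]))
learn.fit_one_cycle(5, slice(1e-3))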

Much thanks to the creators and contributors of the newest and greatest geospatial tools I’ve used extensively here (solaris, rio-tiler, supermercado, geopandas, fastai, and more).

I look forward to hearing your feedback on what parts are not working, could be made clearer and cleaner, or steps I missed or should explain in more detail. The notebook currently sits as a secret gist, but the plan is to polish and publish the Colab notebook(s) in a public, open-sourced repo after incorporating your initial feedback. I’ll also write an accompanying blog post to help guide folks through it. Done!

Also to note: the focus of this notebook is to demonstrate a workflow, so I did not extensively train the model or do much to optimize the segmentation results themselves. There is much performance to be gained!

Dave

26 Likes

Thank you very much, Dave, for the interesting workflow and notebook.

Since I have already predicted land cover segmentations (with a fastai inference learner), I would like to convert these into georeferenced polygons. In your workflow you use
“mask2poly = sol.vector.mask.mask_to_poly_geojson(…)” (solaris) with three classes, but the conversion with only two classes (non-segmentation / segmentation) does not work for me.

Are there other techniques to convert arrays into georeferenced polygons? Thank you in advance.

I had 2 classes (building and background) as a GeoTIFF mask created after stitching the predicted tiles, and this solaris function worked well for me. The only problem I faced was that it does not write the result to disk even if you define output_path with a filename.

You can write the GeoJSON to disk with the to_file command (see the sketch below).
Hope this helps.
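Roughly like this (just a sketch; the exact mask_to_poly_geojson arguments are worth checking against the solaris docs, and the reference image and file names here are placeholders):

import solaris as sol

# pred_mask: 2D array (or single-band GeoTIFF) of predicted building pixels
gdf = sol.vector.mask.mask_to_poly_geojson(
    pred_mask,
    reference_im='stitched_prediction.tif',  # georeferenced source image (placeholder)
    bg_threshold=0.5,                        # pixels above this count as the positive class
    do_transform=True)                       # convert pixel coords to geographic coords

# output_path didn't write anything for me, so save the GeoDataFrame explicitly:
gdf.to_file('buildings.geojson', driver='GeoJSON')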

1 Like

Thank you very much for your advice. The command

x.to_file("x.json", driver="GeoJSON")

helped me to generate a readable file. However, with the following lines the polygons still cannot be converted correctly:

inference_learner = load_learner(x, x)
inference_learner.model.float()

and the solaris conversion step shown in the attached screenshot.

Maybe someone can help me? Thank you in advance.

A bit hard to tell from your screenshot but it looks like mask_to_poly_geojson did create a polygon out of your prediction output (im[:,:,0]) plotted as the blue square with the white holes in middle & top-left.

My guess is that you have to change the value of the bg_threshold argument of the solaris function so that the mask is thresholded correctly for whatever array values you’re trying to polygonize as your positive class. Here’s the doc for that function and its arguments.

Note also that because your polygonized output isn’t georeferenced yet (the mask2poly1.plot axes are in pixel values), the plot of the polygon is vertically flipped in display relative to how your segmentation result (prediction[0].show()) is plotted.

1 Like

Many thanks Dave for your suggestions. Now the transformation to georeferenced polygons works. Attached you will find an extract of the inference learner including the use of the solaris function (solaris.vector.mask.mask_to_poly_geojson):

3 Likes

Great notebook. I have one question about the fastai API that you might be able to answer. In your case, I think you use multichannel images with a single binary class.
For my project, I have single-channel images at the moment, but several classes for segmentation (like in this camvid tutorial). Do you know a way to easily one-hot encode the multiclass label (BxCxHxW instead of the current Bx1xHxW)? I hope it’s not too off-topic. Thank you
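(The raw PyTorch version of what I’m after is roughly the sketch below, assuming mask is the Bx1xHxW integer-label tensor; I’m just not sure how to wire something like this into the fastai pipeline cleanly.)

import torch
import torch.nn.functional as F

def one_hot_mask(mask, num_classes):
    # mask: Bx1xHxW tensor of integer class ids -> BxCxHxW float one-hot tensor
    onehot = F.one_hot(mask.squeeze(1).long(), num_classes)  # BxHxWxC
    return onehot.permute(0, 3, 1, 2).float()                # BxCxHxW

masks = torch.randint(0, 4, (2, 1, 128, 128))   # e.g. batch of 2, 4 classes
print(one_hot_mask(masks, 4).shape)             # torch.Size([2, 4, 128, 128])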

1 Like

The DOTA dataset might prove useful. One of the annotated classes is storage tank. A recent update also provides pixel-level annotations:

Paper:

Datasets:

4 Likes

Been looking at the xview2 competition, which is about identifying buildings in satellite images and classifying the damage due to natural disasters. Thanks @daveluo for your building segmentation notebook, really helpful here.

In the competition’s baseline solution, they use one model to identify the buildings. Then they chop the image up into small images, each just about encompassing an identified building. Then, these small images are passed to another model which classifies the damage level.
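(Roughly, that chopping step looks something like the sketch below, assuming the detected buildings come out as shapely polygons in pixel coordinates and the image is a numpy array; the padding value is arbitrary.)

import numpy as np

def building_chips(image, building_polygons, pad=10):
    # Crop a small chip around each detected building; each chip is then
    # passed to the second model for damage classification.
    h, w = image.shape[:2]
    chips = []
    for poly in building_polygons:
        minx, miny, maxx, maxy = poly.bounds
        x0, y0 = max(int(minx) - pad, 0), max(int(miny) - pad, 0)
        x1, y1 = min(int(maxx) + pad, w), min(int(maxy) + pad, h)
        chips.append(image[y0:y1, x0:x1])
    return chips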

Is there a good reason that one cannot, or shouldn’t, do all this in one step, with one model, by treating the damage levels as just additional categories to segment an image into? This way there will be no need to chop the original image up into smaller bits for damage classification.

1 Like

Is there a good reason that one cannot, or shouldn’t, do all this in one step, with one model, by treating the damage levels as just additional categories to segment an image into?

There isn’t one, but discussions in the forum suggest that the best results are obtained from a two-step process. Also, the classification scoring is calculated on the pixels marked as buildings in the pre-disaster dataset, so if you segment on the post-disaster data you are more likely to miss relevant pixels.

2 Likes

Is the default loss function for fastai’s unet learner just the cross-entropy loss over all pixels? Do you know how this compares with the loss function you use in your notebook? Are many buildings missed, for example? Thanks

This is an interesting paper on the application of ML to predicting volcanic eruptions from satellite images

3 Likes

Hi @immarried, thanks for checking out the notebook and sorry for my slow reply. Yes, the default loss function for fastai’s unet learner is a pixel-wise cross entropy loss, specifically FlattenedLoss of CrossEntropyLoss(). You can confirm this by printing learn.loss_func.

I’ve found that a combination of cross entropy (or focal loss, which is a modification of CE loss) + dice loss has always done better for my segmentation tasks, whether with 1 binary target channel or 3, than cross entropy alone. This seems to be a pretty consistent experience based on what the top-5 finalists of the last SpaceNet building challenge did: https://github.com/SpaceNetChallenge/SpaceNet_Off_Nadir_Solutions
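(For the plain single-target case the idea is just a weighted sum of the two terms; here is a minimal sketch in raw PyTorch, not the notebook’s exact code:)

import torch.nn as nn
import torch.nn.functional as F

class CEDiceLoss(nn.Module):
    # weighted sum of pixel-wise cross entropy and soft dice loss
    def __init__(self, ce_wt=1.0, dice_wt=1.0, eps=1e-7):
        super().__init__()
        self.ce_wt, self.dice_wt, self.eps = ce_wt, dice_wt, eps

    def forward(self, output, target):
        # output: BxCxHxW logits; target: BxHxW integer class ids
        ce = F.cross_entropy(output, target)
        probs = F.softmax(output, dim=1)
        onehot = F.one_hot(target, output.shape[1]).permute(0, 3, 1, 2).float()
        inter = (probs * onehot).sum(dim=(2, 3))
        union = probs.sum(dim=(2, 3)) + onehot.sum(dim=(2, 3))
        dice = 1 - ((2 * inter + self.eps) / (union + self.eps)).mean()
        return self.ce_wt * ce + self.dice_wt * dice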

Also re: your suggestion for the xView2 challenge to do a multiclass segmentation of damage levels, I agree with @wwymak’s point that the pixel misalignment between the pre- and post-images may throw off your segmentation performance significantly. Worth a try though!

3 Likes

Did anyone participate in the Open AI Caribbean Challenge hosted by DrivenData?
I am looking to discuss better solutions and expand my knowledge if anyone participated.
I finished in 11th position and am really happy about it, as this was my first-ever competition!
All thanks to @daveluo and his notebooks, which helped me throughout.

6 Likes

I came 21st. I wasn’t able to do any work on it for the past three weeks :frowning: (I actually had time again this morning, but the comp had closed), so I’m fairly happy with the result.
I used the fastai codebase with resnets, densenets, and efficientnets.

For me too; thanks to @daveluo for the intro to solaris and rasterio, which was critical for data prep.

3 Likes