Geospatial Deep Learning resources & study group

how about https://solaris.readthedocs.io/en/latest/tutorials/notebooks/api_masks_tutorial.html#polygon-footprints ?

In the tutorial, geojson files are used to create binary masks, but the original images are extracted from tif files. From the Microsoft CanadianBuildingFootprints dataset description, it looks like the given geojson files are all we need to produce both images and labels. I’m new to this type of file and would really appreciate your advice.

Hi everyone.
I’m looking for a dataset of aerial imagery of roofs for Mexico City. Does anyone know where I can get my hands on it?

Any help would be much appreciated, thank you all.

Check this please

Hi Aninda,
Thanks a lot for redirecting me toward that resource; it’s exactly what I was looking for.
Thanks again.

Sorry, I completely missed your question. Looking at the dataset, there are only the geojson masks, so what you would need to do is find a source of satellite imagery for Canada that is georeferenced, i.e. the tiff files have lat/lng info. You can read the geojson file into a geopandas dataframe and find the min/max lat/lng extent, then find a satellite image that covers that extent. You can then pass this as a parameter into the solaris function I linked to, and it will create the correct mask file for you for that satellite image.
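The "find the min/max lat/lng extent" step can be sketched with just the standard library (geopandas would give you the same thing via `gdf.total_bounds`); the filename is a placeholder, and real footprint files will be much larger:

```python
import json

def geojson_extent(path):
    """Return (min_lng, min_lat, max_lng, max_lat) over all polygon
    coordinates in a GeoJSON FeatureCollection."""
    with open(path) as f:
        collection = json.load(f)

    lngs, lats = [], []
    for feature in collection["features"]:
        geom = feature["geometry"]
        # Normalize Polygon / MultiPolygon to a flat list of rings.
        rings = geom["coordinates"]
        if geom["type"] == "MultiPolygon":
            rings = [ring for poly in rings for ring in poly]
        for ring in rings:
            for point in ring:  # GeoJSON positions are [lng, lat, ...]
                lngs.append(point[0])
                lats.append(point[1])
    return min(lngs), min(lats), max(lngs), max(lats)
```

With that bounding box you can search for a georeferenced satellite tile covering it, then hand both to solaris as described above.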

Thanks for the explanation. It’s strange that they didn’t provide the imagery as well.

Pardon my total ignorance in advance. I’m taking the v4 course currently, and I downloaded a dataset called PatternNet and wanted to compare results to benchmarks. I’m a total novice when it comes to DL and geospatial. In the paper they benchmark using “Average Normalized Modified Retrieval Rank” (ANMRR) as a metric. I’m assuming this metric is common in this field, but it’s my first time coming across it. I guess I have to create a custom metric. Google results for ANMRR are kind of spotty. I found a Matlab implementation that I can probably feel my way through, but I was wondering if anyone knows of a decent resource.
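For what it’s worth, ANMRR comes from the MPEG-7 retrieval literature and is small enough to implement directly. Here is a sketch in plain Python using the common MPEG-7 conventions (the K(q) cutoff and the 1.25·K penalty rank); the PatternNet paper may use a slight variant, so treat these constants as assumptions to check against their definition:

```python
def anmrr(retrievals, ground_truths):
    """Average Normalized Modified Retrieval Rank (MPEG-7 style).

    retrievals: one ranked result list per query.
    ground_truths: one set of relevant item ids per query.
    Returns a value in [0, 1]; lower is better (0 = perfect retrieval).
    """
    gtm = max(len(gt) for gt in ground_truths)  # largest ground-truth set
    nmrrs = []
    for ranked, relevant in zip(retrievals, ground_truths):
        ng = len(relevant)
        k = min(4 * ng, 2 * gtm)  # MPEG-7 cutoff for "found in time"
        penalty = 1.25 * k
        # 1-based rank of each relevant item; penalized if missing
        # or found beyond the cutoff.
        rank_of = {item: i + 1 for i, item in enumerate(ranked)}
        ranks = [rank_of.get(item, penalty + 1) for item in relevant]
        ranks = [r if r <= k else penalty for r in ranks]
        avr = sum(ranks) / ng                     # average rank
        mrr = avr - 0.5 * (1 + ng)                # modified retrieval rank
        nmrrs.append(mrr / (penalty - 0.5 * (1 + ng)))  # normalized
    return sum(nmrrs) / len(nmrrs)
```

Perfect retrieval (all relevant items ranked first) gives 0; relevant items missing entirely gives 1.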

It’s most likely the images have copyright attached to them, which means MS can’t open-source them together with the footprints. If you don’t have to use Canada specifically (e.g. if you just want to do a project in building segmentation), you might want to look at the SpaceNet datasets instead.

@daveluo
I have been trying to get your notebook working and have been facing an issue when using the solaris mask function:

fbc_mask = sol.vector.mask.df_to_px_mask(df=cropped_polys_gdf,
                                         channels=['footprint', 'boundary', 'contact'],
                                         affine_obj=tfm, shape=(tile_size, tile_size),
                                         boundary_width=5, boundary_type='inner',
                                         contact_spacing=5, meters=True)

I get an error saying

UnboundLocalError: local variable 'out_crs' referenced before assignment

This error is preventing me from generating the mask tiles. I’ve spent a lot of time trying to debug this issue but couldn’t find an answer. Can you please tell me how to fix this?

@daveluo
Thank you so much for the Google Colab segmentation notebook and detailed walk-through of the whole process!

I was able to successfully run my own data through the process (30cm WorldView-3 imagery + building footprints). However, when training, the validation and training losses only drop to around 0.55. I tried training for more epochs (up to 100), a deeper network (resnet50) and a bigger batch size (bs=32). Unfortunately there was little to no improvement.

Any ideas what could cause this and how to improve the training process?

Thanks!

Hello everyone. I’m working on building footprint detection, but I want to dive deeper into pre-processing techniques as well as common best practices in ESDA. Could you point me to some starting code I can use?

This must be something to do with the projection (CRS) of the shapefile. Did you try passing a georeferenced reference image and then creating the mask, so that solaris can infer the output CRS?

Is anyone working on the topic of ‘crop classification with multi-temporal satellite imagery’, i.e. classifying the crop type in agricultural fields?

I am doing a little bit of work in this area. I cannot talk about the details of the work, but I would be happy to discuss recent methods and papers.

Hey guys, I was wondering if anyone here has experience with Pix2Pix (or other generative adversarial networks) and geospatial data. For those who don’t know, a GAN is a model that’s trained on real data and then produces “fakes” that are plausibly real.

I’ve tried training a Pix2Pix model on a small training set of around 300 images cropped in different ways from a ~4000x4000 pixel image. The image is the topography of a landscape where each pixel represents a height.
The goal is to translate from topography without snow to topography with snow.

During training, when it generates samples, it seems to reproduce the height profile of the real image well (training sample).
But when testing on an image from a validation data set, the resulting height profile is wrong, though with somewhat the same shape (validation/testing sample).

So the Pix2Pix model seems to generalize well on training images but not unseen images. Does anyone know of methods or other models that might give better results for my type of image translation task? I asked someone who implemented Pix2Pix in Python, and he suggested that I needed more training data. Any help would be greatly appreciated.

Best,
Johan

Hi Johan, you will probably need more than just 300 samples for such a complex task. Have you thought about randomly cropping the big image “on the fly” (inside your DataGenerator)? Moreover, you can apply data augmentation techniques such as the dihedral transforms (rotations and flips) to increase sample diversity.
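Both ideas (on-the-fly random crops plus the eight dihedral transforms) are easy to sketch on plain 2D arrays; the function names here are illustrative, and in a real image-to-image pipeline you would of course apply the same crop and transform to the input and target images:

```python
import random

def random_crop(image, size):
    """Take a random size x size crop from a 2D array (rows of pixels)."""
    h, w = len(image), len(image[0])
    top = random.randint(0, h - size)
    left = random.randint(0, w - size)
    return [row[left:left + size] for row in image[top:top + size]]

def dihedral(image, k):
    """Apply one of the 8 dihedral transforms (k in 0..7):
    k % 4 quarter-turn rotations, plus a horizontal flip if k >= 4."""
    out = image
    for _ in range(k % 4):
        # Rotate 90 degrees clockwise: reversed rows become columns.
        out = [list(col) for col in zip(*out[::-1])]
    if k >= 4:
        out = [row[::-1] for row in out]
    return out

def augmented_sample(image, size):
    """One on-the-fly training sample: random crop + random dihedral."""
    return dihedral(random_crop(image, size), random.randint(0, 7))
```

With ~4000x4000 source pixels, random crops alone give you vastly more distinct training windows than 300 fixed ones, and the dihedral transforms multiply that by up to 8.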

Hey all,

Thought I would share with this group a project I worked on with satellite imagery after completing Part 1 of the course. I used the Inria dataset to build a semantic segmentation model that can identify buildings in satellite images. The post has some details on a few tricks for working with large format satellite images as well as some things I did to customize the network architecture. Hope somebody finds it interesting or useful!

Hi @everyone

Please, I would like some guidance: I am trying to understand how I can perform object extraction from satellite imagery, and I don’t know where to start. I would also like guidance on extracting specific information from the objects detected in the imagery.

Thanks

Hi @joell001,
I’ve been working on image segmentation (a closely related problem) on satellite images using fastai and pure PyTorch, and have published some Medium posts with code and links to other resources that could help you. Here are links to some of the related stories:




Mauricio
