Using GeoTIFF images

I’ve been trying to figure out how to deal with GeoTIFF files. I can’t read them directly (I get an UnidentifiedImageError: cannot identify image file), and I haven’t yet figured out how to convert them.

Both imagemagick and gdal_translate tell me that float values aren’t supported (gdal_translate says “driver doesn't support data type Float32.” and imagemagick says “Sorry, can not handle images with 32-bit samples”). The GeoTIFF file itself is fine though; I can open it in QGIS.

Please let me know if you’ve had more success than me.

Right. It depends on how you want to process them. I have not used fast.ai V2 yet, but in V1 a CNN would expect an image in 3x8-bit channel (RGB) format, i.e. 24 bit.

In my case I have greyscale GeoTiffs (from LIDAR) as Float32, and I pre-process them into 8-bit GeoTiffs (which is sufficient for my purpose) using Rasterio and Numpy.
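Roughly, the pre-processing looks like this (a minimal sketch; the file names and the linear 0-255 rescaling are placeholders rather than my exact code):

import numpy as np
import rasterio

with rasterio.open("lidar_float32.tif") as src:   # placeholder input file
    data = src.read(1)                            # single Float32 band as a numpy array
    profile = src.profile                         # georeferencing metadata to reuse

# linear rescale of the float values into 0-255
scaled = np.interp(data, (np.nanmin(data), np.nanmax(data)), (0, 255)).astype(np.uint8)

profile.update(dtype=rasterio.uint8, nodata=None)  # drop any float NoData value that doesn't fit uint8
with rasterio.open("lidar_uint8.tif", "w", **profile) as dst:  # 8-bit GeoTiff, same georeferencing
    dst.write(scaled, 1)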


Thanks, I will check out Rasterio! So you basically downsampled them but kept them in GeoTIFF format?

The problem with geotiffs (or, if I recall correctly, images with multiple bands in general) and the fastai image classes is that PIL doesn’t support them at all, so you have to do something different, like open them with rasterio.

One example that might work for you is here, tested to work with geotiffs from SEN12MS, both Sentinel-1 and Sentinel-2 images. Opening the images is the easy part; visualization (especially for radar images) is the hard part.

Hope this helps; rasterio is one of the best ways to open geotiffs with Python. xarray is another option, but there the method for opening geotiffs is called open_rasterio.
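For example (a minimal sketch; the path is a placeholder):

import xarray as xr

# opens the geotiff as a labelled DataArray with (band, y, x) dimensions
da = xr.open_rasterio("path/to.tif")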


Thanks for confirming rasterio as @Archaeologist suggested. I’ve indeed been able to read the values as an ndarray quite easily. Do I understand correctly that xarray also uses rasterio?

I agree that visualization is quite tricky. Thanks for the links, will check them out.

In my current use case I am trying to open images from Google Earth Engine; they use the so-called “COG” format by default, which stands for Cloud-Optimized GeoTIFF.

Yes, xarray’s open_rasterio method uses rasterio. I’ve used it to make interactive visualizations with holoviews, similar to those shown in this paper: Practical approach for hyperspectral image processing in python.
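As a rough idea, a minimal sketch (the path, band selection, and variable name are placeholders; in a notebook the object renders as an interactive plot):

import xarray as xr
import holoviews as hv
hv.extension("bokeh")

da = xr.open_rasterio("path/to.tif")                       # labelled DataArray with (band, y, x) dimensions
img = hv.Image(da.sel(band=1).rename("value"), kdims=["x", "y"])  # interactive image of a single band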

At least the COG homepage lists rasterio as an example tool for working with COG images, so it should work in some way.


Wow, those are some amazing visualizations, Figures 5 and 6 look like they are sticking out of the screen!


To add some more tricks I learned when using rasterio (for others who might find them useful):

This has some useful code snippets for processing GeoTiff tiles into square-shaped TIFFs (as CNN input):
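The gist is rasterio’s windowed reading; a rough sketch (the tile size and file name are my placeholders, not the linked snippets):

import rasterio
from rasterio.windows import Window

tile_size = 224  # assumed square tile size for the CNN input

with rasterio.open("large_geotiff.tif") as src:
    for row in range(0, src.height - tile_size + 1, tile_size):
        for col in range(0, src.width - tile_size + 1, tile_size):
            window = Window(col, row, tile_size, tile_size)
            tile = src.read(window=window)  # shape: (bands, tile_size, tile_size)
            # ... write each tile out or feed it into your pipeline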

And here is my code line for converting a numpy array (float32) into 8-bit uint:

np.interp(crop, (tileMin, tileMax), (0, 255)).astype(np.uint8)

with tileMin/tileMax being the lowest/highest values in your float32 array.

Caution: some GeoTiffs have pixels with “NoData” values in them. I recommend that you deal with these before the conversion.
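For example, something along these lines (a sketch; it assumes the NoData value is stored in the file’s metadata):

import numpy as np
import rasterio

with rasterio.open("tile_float32.tif") as src:     # placeholder file name
    crop = src.read(1)
    nodata = src.nodata                            # NoData value from the metadata, may be None

if nodata is not None:
    crop = np.where(crop == nodata, np.nan, crop)  # mark NoData pixels as nan
tileMin, tileMax = np.nanmin(crop), np.nanmax(crop)
crop = np.nan_to_num(crop, nan=tileMin)            # fill NoData with the minimum before scaling
out = np.interp(crop, (tileMin, tileMax), (0, 255)).astype(np.uint8)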


Good to know!

Thanks for sharing! However, I think I will keep my data as floats instead of downsampling them to int. After all, when reading in the file it gets turned into floats again, right? So I think a good approach for me could be to read the TIFF files with rasterio and save them as numpy arrays.
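Something like this sketch, for instance (the paths are placeholders):

import numpy as np
import rasterio

with rasterio.open("path/to.tif") as src:
    arr = src.read()            # all bands as a (bands, height, width) float32 array
np.save("path/to.npy", arr)     # keeps the float values as they are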

Yes, good point, I noticed that my file contains some nan values at the edge.

So, in a nutshell, I can get my data quite easily:

import rasterio
from fastai2.vision.all import *
f = rasterio.open("path/to.tif")
d1 = f.read(1)
d2 = f.read(2)
d3 = f.read(3)
x = TensorImage([d1,d2,d3])

Does it make sense to use TensorImage, or is this not a good idea?


Using TensorImage is a good idea, because all image augmentations work more or less immediately. There are a couple of gotchas, however:

  1. The order of channels might not be R, G, B, depending on the image format. For instance, when reading Sentinel-2 images you need to do something like the following (it might be different for Google Earth Engine images):
red = f.read(4)
green = f.read(3)
blue = f.read(2)
x = TensorImage([red, green, blue])
  2. Lighting-related augmentations assume that images have values between 0 and 1, which might not be the case (Sentinel-2 images typically have values between 0 and 10000). You can use IntToFloatTensor(div=<number>) as an item transform to fix this; just set div to match your data. This also means that if you train the model from scratch, you either need to compute normalization stats for the scaled images or specify Normalize.from_stats(*stats / <number>) for the batch transform (see the sketch after this list).
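For example, a rough sketch of scaling a raw Sentinel-2-style tensor (the random tensor and div=10000 are placeholders for your own data):

from fastai2.vision.all import *

# stand-in for a geotiff read via rasterio; real values would come from f.read(...)
x = TensorImage(torch.randint(0, 10000, (3, 64, 64)))
x_float = IntToFloatTensor(div=10000)(x)   # scales the raw values into roughly [0, 1]
# in a pipeline you would pass it as an item transform, e.g. item_tfms=[IntToFloatTensor(div=10000)]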