I am trying to segment images with bounding boxes like Jeremy is doing in the [006b_pascal.ipynb](https://github.com/fastai/fastai_docs/blob/master/dev_nb/006b_pascal.ipynb) notebook.
The images are huge satellite images in GeoTIFF format, around 400 MB per image at roughly 4,000×15,000 pixels.
I have annotated the images by hand in the same format Jeremy uses in the notebook.
What are good ways of working with such large images? Should I cut them into smaller images? If so, I worry about the added complexity with the bounding boxes: keeping track of a bounding box's position from the large image across many smaller images seems very error-prone.
How should I work with large images in relation to a bounding box problem?
That’s right, cutting your very large image into smaller tiles is the common approach. Use a tile size larger than your anticipated object size, and use a ‘sliding window’ so every whole object is captured at some point, e.g. 256px tiles with 64px offsets. You can use command-line tools like ImageMagick for this, but I prefer NumPy slices. Keep track of the ‘original’ positions if you need them. Another advantage is that you can now control class balance by undersampling empty tiles.
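To make the bookkeeping concrete, here is a rough sketch of the NumPy-slicing approach: it cuts a sliding-window grid of tiles, remaps each bounding box into tile coordinates, records each tile's original offset so positions can be recovered later, and skips empty tiles for class balance. The function name and the `[y0, x0, y1, x1]` box layout are my assumptions, not something from the notebook.

```python
import numpy as np

def tile_image(img, boxes, tile=256, stride=64):
    """Cut `img` (H, W, C) into overlapping tiles and remap bounding boxes.

    `boxes` is an array of [y0, x0, y1, x1] in full-image pixel coordinates
    (an assumed layout). Returns a list of (tile_array, tile_boxes, (y_off, x_off))
    so the original position of every tile, and hence every box, is recoverable.
    """
    out = []
    H, W = img.shape[:2]
    for y in range(0, H - tile + 1, stride):
        for x in range(0, W - tile + 1, stride):
            patch = img[y:y + tile, x:x + tile]
            kept = []
            for y0, x0, y1, x1 in boxes:
                # Clip the box to this tile's window, then shift to tile coords.
                cy0, cx0 = max(y0, y) - y, max(x0, x) - x
                cy1, cx1 = min(y1, y + tile) - y, min(x1, x + tile) - x
                if cy1 > cy0 and cx1 > cx0:
                    kept.append([cy0, cx0, cy1, cx1])
            # Undersampling empty tiles: only keep tiles containing an object.
            if kept:
                out.append((patch, np.array(kept), (y, x)))
    return out
```

To recover a box's full-image position later, add the stored `(y_off, x_off)` back onto the tile-coordinate box. Note that a box straddling a tile edge gets clipped; the overlap from the 64px stride is what ensures the whole object still appears intact in some neighbouring tile.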
Good luck: there is much pre/post-processing to be done with satellite images. The first thing to decide is whether you want or need the full-resolution image. For bounding boxes (as opposed to pixel-level segmentation) you might want to resize so that the sought objects land in some Goldilocks size suitable for today’s models and GPUs, such as less than 256px.
How can we tile our WSI (whole-slide) images? Does it have to be done manually, or is there software that can do it?