My purpose for learning deep learning is ultimately to record rock art, and then analyze the records.
The steps I envision are:

- Identify regions of interest (ROIs) by recognizing petroglyphs with a MobileNet model on an AIY Vision Bonnet (Google's add-on board for the Raspberry Pi camera; it runs TensorFlow models).
- Use the bounding box to aim a TinyLiDAR sensor at the ROI and scan it (petroglyphs can be hard to see visually).
- Upload the scans to a server with more GPU power to perform instance segmentation.
- Classify the styles.
- (GPS & geolocation info for viewshed analysis is also included.)
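The aiming step above is mostly geometry. Here is a minimal sketch of going from a detector's normalized bounding box to pan/tilt angles for a LiDAR gimbal, assuming a simple linear pixel-to-angle mapping. The function name and the field-of-view constants are my own (the FOV values are the commonly quoted figures for the Pi Camera v2; substitute your camera's actual specs):

```python
# Hypothetical helper: convert a detection box into pan/tilt aiming angles.
# Assumes a linear mapping from image offset to angle (reasonable for
# modest FOVs; a real setup would calibrate this).
H_FOV_DEG = 62.2   # Pi Camera v2 horizontal field of view (check your camera)
V_FOV_DEG = 48.8   # Pi Camera v2 vertical field of view

def box_center_to_pan_tilt(box, h_fov=H_FOV_DEG, v_fov=V_FOV_DEG):
    """Convert a normalized bounding box (x0, y0, x1, y1 in [0, 1])
    into pan/tilt angles in degrees relative to the camera axis."""
    x0, y0, x1, y1 = box
    cx = (x0 + x1) / 2.0   # box center in normalized image coordinates
    cy = (y0 + y1) / 2.0
    pan = (cx - 0.5) * h_fov    # positive = right of image center
    tilt = (0.5 - cy) * v_fov   # positive = above image center
    return pan, tilt

# A detection centered in the image needs no aiming:
print(box_center_to_pan_tilt((0.4, 0.4, 0.6, 0.6)))  # (0.0, 0.0)
```

The returned angles could then be fed to whatever servo controller points the TinyLiDAR.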
The last 20 minutes of fast.ai lesson 7, on class activation maps, is as close as I've gotten so far. I'm accumulating my dataset so I can practice with the dogs-vs-cats lessons. I'm also watching bits and pieces of course 2, lessons 13 & 14, just for entertainment really!
Of course, Mask R-CNN is the state of the art for instance segmentation, and PyTorch models exist.
My starting-point inspiration is:

Zeppelzauer, M., Poier, G., Seidl, M., Reinbacher, C., Breiteneder, C., Bischof, H., Schulter, S., 2015. Interactive Segmentation of Rock-Art in High-Resolution 3D Reconstructions. In: Proceedings of the Conference on Digital Heritage 2015, Granada, Spain, October 2015.
They used enhanced deviation maps (EDMs) to segment with supervision.
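The core idea behind a deviation map is that carvings deviate locally from the overall rock surface. This is not the paper's exact algorithm, just a toy numpy sketch of that idea: subtract a locally smoothed depth map from the raw depth, which removes the rock face's overall slope and leaves pecked grooves as strong negative deviations.

```python
import numpy as np

def box_blur(depth, k=7):
    """Mean-filter a 2D depth map with a k x k box (edge-padded)."""
    pad = k // 2
    padded = np.pad(depth, pad, mode="edge")
    kernel = np.ones(k) / k
    # Separable box filter: blur rows, then columns.
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, mode="valid"), 1, padded)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, kernel, mode="valid"), 0, blurred)
    return blurred

def deviation_map(depth, k=7):
    """Deviation of each point from the locally smoothed surface.
    Negative values = carved below the surrounding rock face."""
    return depth - box_blur(depth, k)

# Synthetic example: a tilted rock face with one carved groove.
yy, xx = np.mgrid[0:40, 0:40]
depth = 0.05 * xx                 # overall slope of the rock surface
depth[18:22, 5:35] -= 1.0         # a pecked groove, 1 unit deep

dev = deviation_map(depth, k=9)
# Slope is removed; the groove shows as strong negative deviation.
print(dev[20, 20] < -0.4, abs(dev[5, 20]) < 1e-6)  # True True
```

A threshold on the deviation map would then give a rough segmentation mask, which is roughly the signal the paper's supervised segmentation builds on.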
I also like the effect of the method from Shimodaira, H., "Automatic Color Image Segmentation Using a Square Elemental Region-Based Seeded Region Growing and Merging Method".
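For a feel of the underlying technique, here is a bare-bones seeded region growing demo in numpy. This is the generic pixel-level algorithm, not Shimodaira's square-elemental-region variant, and the function name and tolerance parameter are mine:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol=0.1):
    """Minimal seeded region growing: starting from `seed` (row, col),
    absorb 4-connected neighbors whose intensity is within `tol`
    of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    frontier.append((nr, nc))
    return mask

# A bright 4x4 square on a dark background: seeding inside the square
# recovers exactly the 16 square pixels.
img = np.zeros((10, 10))
img[2:6, 2:6] = 1.0
mask = region_grow(img, (3, 3), tol=0.1)
print(mask.sum())  # 16
```

Shimodaira's method grows square elemental regions instead of single pixels and then merges them, which makes it much less sensitive to pixel noise than this toy version.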