I’m looking for some advice on how to use deep learning on non-image but image-like data. Similar datasets might be satellite data or medical imaging data. If you reshape the data, you can make it look like a stack of 2D arrays or images.
I have a single beam lidar rotating and driving down a pipe.
The data supplied by the lidar unit is a 270-degree polar scan (one range reading for each degree).
Because it’s moving at a constant rate down the pipe, I can stack successive scans on top of each other so the result looks exactly like an image.
I essentially have a 2-dimensional numpy array of data, N scans tall by 270 data points wide. Since the travel speed is constant, I can cut this array into 1-foot sections by slicing it every 80 scans. So each section is a 2D numpy array that is 80 tall by 270 wide. The values are integers from 0 to 2000, representing range in millimeters.
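For reference, here’s roughly how I do the slicing and scaling (the array contents and the variable names are just placeholders to show the shapes):

```python
import numpy as np

# Fake stand-in for my real data: N scans tall, 270 readings wide,
# integer ranges in millimeters from 0 to 2000.
rng = np.random.default_rng(0)
scans = rng.integers(0, 2001, size=(800, 270), dtype=np.int16)

SCANS_PER_SECTION = 80  # ~1 foot of travel at my scan rate

# Drop any trailing partial section, then split into (num_sections, 80, 270).
num_sections = scans.shape[0] // SCANS_PER_SECTION
sections = scans[: num_sections * SCANS_PER_SECTION].reshape(
    num_sections, SCANS_PER_SECTION, 270
)

# Scale the millimeter values to 0-1 floats as a network input.
sections = sections.astype(np.float32) / 2000.0
print(sections.shape)  # (10, 80, 270)
```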
I’m looking for some advice on what to do with this data.
I have divided these arrays up into different classification groups, and I can feed them through any number of transfer-learning models or train a new model from scratch.
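As a baseline, here’s the kind of small from-scratch CNN I had in mind for the 1-channel 80x270 “images” (the layer sizes and the class count of 5 are placeholders, not tuned values):

```python
import torch
import torch.nn as nn

class PipeSectionCNN(nn.Module):
    """Tiny CNN sketch for classifying 1-channel 80x270 lidar sections."""

    def __init__(self, num_classes=5):  # placeholder class count
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            # Pool to a fixed spatial size so the linear layer works
            # regardless of the exact section dimensions.
            nn.AdaptiveAvgPool2d((4, 8)),
        )
        self.classifier = nn.Linear(32 * 4 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = PipeSectionCNN()
logits = model(torch.rand(4, 1, 80, 270))  # batch of 4 sections
print(logits.shape)  # (4, 5)
```

For the transfer-learning route, my understanding is I’d just repeat the single channel to three (e.g. `x.repeat(1, 3, 1, 1)`) so a pretrained ImageNet backbone accepts the input, then replace its final layer with one sized for my classes.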
Any suggestions on how to train a model to classify these “images” would be greatly appreciated.