Matrix input and custom datasets

Hey guys

I’ve worked through Part 1, and am partway through Part 2 of the Deep Learning course. I apologize if this gets addressed later.

I have a dataset where each observation is essentially an n x 24 matrix. The width of the rows is constant, but the number of rows per observation can vary between 1 and ~8,000.

I am unclear on how best to write a Dataset/DataLoader that can feed these matrices into the fastai library. Most of the datasets encountered in the wild and in the course deal with either images, or structured data where one row corresponds to one observation. Neither format fits my data, so I'm not sure how to proceed.

My first thought was to replicate how the image datasets load individual pixels, since in some sense an image can be flattened into an n x 3 matrix, where the 3 corresponds to a pixel's RGB values.
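To make this concrete, here is roughly what I had in mind: a plain PyTorch `Dataset` plus a collate function that pads each batch to its longest sequence. The names (`MatrixDataset`, `pad_collate`) and the zero-padding choice are just my sketch, not anything from the course:

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
from torch.nn.utils.rnn import pad_sequence

class MatrixDataset(Dataset):
    """Each item is an (n_i, 24) float tensor plus an integer class label."""
    def __init__(self, matrices, labels):
        self.matrices = [torch.as_tensor(m, dtype=torch.float32) for m in matrices]
        self.labels = torch.as_tensor(labels, dtype=torch.long)

    def __len__(self):
        return len(self.matrices)

    def __getitem__(self, i):
        return self.matrices[i], self.labels[i]

def pad_collate(batch):
    """Zero-pad the variable-length matrices in a batch to the longest one."""
    xs, ys = zip(*batch)
    lengths = torch.tensor([x.shape[0] for x in xs])
    xs = pad_sequence(xs, batch_first=True)  # (batch, max_len, 24)
    return xs, lengths, torch.stack(ys)

# toy usage: 10 observations with between 1 and ~100 rows each
data = [np.random.randn(np.random.randint(1, 100), 24) for _ in range(10)]
labels = np.random.randint(0, 3, size=10)
dl = DataLoader(MatrixDataset(data, labels), batch_size=4, collate_fn=pad_collate)
```

Returning the true lengths alongside the padded batch would let a downstream model ignore the padding (e.g. via `pack_padded_sequence`), but I'm not sure if that's the idiomatic way to plug into fastai.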

Is there some common consensus on how to proceed with this sort of data?

Dear Nick,

This sounds like sequence data, so perhaps it could be handled with RNNs?

But maybe someone with more experience can verify if this is a suitable approach.

Best regards
Michael

Yeah, I also thought of RNNs as the most obvious way forward, but I'm only familiar with RNNs as a way to make predictions, as opposed to classifying. Although I recall one of the NLP lessons dealt with sentiment analysis, and I guess that's classification.

Also yes, it is sequence data: a time series of pitches and timbres.

I'm currently going through lessons 6 and 7 but so far haven't implemented another RNN approach.
The "text" RNNs from the lessons make predictions on the next character/word.
If you modify the input to accept a vector with 24 dimensions and the output to match your number of classes, you should have a system that fits your application (however, this is of course much easier said than done). :slightly_smiling_face:
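Something like the following minimal PyTorch sketch is what I mean. The GRU, the hidden size of 128, and the 5 output classes are all placeholder choices, not from the lessons; the idea is just that `input_size=24` matches your feature width and the final hidden state feeds a linear classification head:

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """GRU over (batch, seq_len, 24) inputs; classify from the final hidden state."""
    def __init__(self, n_features=24, hidden=128, n_classes=5):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, h = self.rnn(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])  # logits: (batch, n_classes)

model = SequenceClassifier()
x = torch.randn(4, 100, 24)      # batch of 4 sequences, 100 timesteps each
logits = model(x)                # shape (4, 5)
```

With variable-length sequences you'd also want to pad each batch and ideally mask the padding, but this shows the input/output change I had in mind.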
Are you using open data?