I’ve worked through Part 1 and am partway through Part 2 of the Deep Learning course, so I apologize if this gets addressed later.
I have a dataset where each observation is essentially an n x 24 matrix: every row has 24 columns, but the number of rows per observation varies between 1 and ~8,000.
I am unclear on how best to write a dataloader that can feed these matrices into the fastai library. Most datasets encountered in the wild and in the course are either images or structured data where one line corresponds to one observation. Neither format fits my data, so I’m not sure how to proceed.
My first thought was to replicate how the image datasets load individual pixels, because in some sense an image is an n x 3 matrix, where the 3 corresponds to the RGB values of a pixel.
Is there some common consensus on how to proceed with this sort of data?