Howdy!
I am working on converting time-series data into images using visibility graphs, so that I can then train a CV model on the resulting images.
Since a single PPG signal yields multiple images, my guess is that for the model to see the whole signal and be able to run inference on it, I'd need to treat this as a video classification task.
How should one go about that?
Sorry if that’s a silly question.
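For anyone unfamiliar with the conversion step: a common variant is the *natural* visibility graph, where each sample is a node and two samples are connected if the straight line between them passes above every intermediate sample. This is a naive O(n²) sketch of that idea (not the VGTL-net method from the paper, just the generic construction), producing an adjacency matrix you could render as an image:

```python
import numpy as np

def natural_visibility_adjacency(signal):
    """Naive O(n^2) natural visibility graph: nodes are samples;
    samples a < b are connected when every intermediate sample
    lies strictly below the straight line joining them."""
    n = len(signal)
    adj = np.zeros((n, n), dtype=np.uint8)
    for a in range(n):
        for b in range(a + 1, n):
            visible = True
            for c in range(a + 1, b):
                # height of the a-b line at position c
                line = signal[b] + (signal[a] - signal[b]) * (b - c) / (b - a)
                if signal[c] >= line:
                    visible = False
                    break
            if visible:
                adj[a, b] = adj[b, a] = 1
    return adj
```

Sliding a window over the PPG signal and converting each window this way is what gives you one image per window, hence a sequence of images per signal.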
EDIT: So now I have finished working on my Patient class.
The class takes a PPG signal and converts it to an image using VGTL-net, following the paper of the same name.
My Patient class holds 4 image sequences:
- angry_images : tuple(sequence, label)
- sad_images : tuple(sequence, label)
- joy_images : tuple(sequence, label)
- relaxed_images : tuple(sequence, label)
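In code, I imagine the structure above looking something like this. This is a hypothetical sketch, not the actual class: the field names mirror the list above, but the types (lists of NumPy arrays, integer labels) are assumptions.

```python
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

# Each field is a (sequence_of_frames, label) pair, as described in the post.
Sequence = Tuple[List[np.ndarray], int]

@dataclass
class Patient:
    angry_images: Sequence
    sad_images: Sequence
    joy_images: Sequence
    relaxed_images: Sequence

    def sequences(self):
        """Yield the four (sequence, label) pairs in a fixed order,
        e.g. for feeding a Dataset."""
        yield self.angry_images
        yield self.sad_images
        yield self.joy_images
        yield self.relaxed_images
```

One design note: iterating over `(sequence, label)` pairs like this makes it easy to later flatten many patients into a single training set.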
I am still learning the ropes with this DL stuff, but I assumed the format above would be the most convenient way to process the data. There's a further step to ensure all sequences are the same length, but that's for another day.
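For that length-equalizing step, one simple option (an assumption on my part, not the only choice) is to pad shorter sequences by repeating the last frame and truncate longer ones:

```python
import numpy as np

def pad_sequence(frames, target_len):
    """Pad a list of (H, W) frames to target_len by repeating the
    last frame; truncate if longer. Zero-padding plus a mask is a
    common alternative."""
    frames = list(frames)[:target_len]
    while len(frames) < target_len:
        frames.append(frames[-1])
    return np.stack(frames)  # shape (target_len, H, W)
```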
Right now I am just trying to get the logic down. Did I mess up somewhere? How should I build from there?
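On the video classification question itself, one common pattern is a per-frame CNN encoder followed by a recurrent layer over the sequence. This is a minimal PyTorch sketch under my own assumptions (PyTorch, grayscale frames, placeholder layer sizes, four emotion classes), not the architecture from the paper:

```python
import torch
import torch.nn as nn

class SequenceClassifier(nn.Module):
    """CNN encoder applied per frame, GRU over the frame sequence,
    linear head over the final hidden state."""
    def __init__(self, num_classes=4, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B*T, 32, 1, 1)
        )
        self.gru = nn.GRU(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        feats = self.encoder(x.flatten(0, 1)).flatten(1)  # (B*T, 32)
        feats = feats.view(b, t, -1)   # (B, T, 32)
        _, h = self.gru(feats)         # h: (num_layers, B, hidden)
        return self.head(h[-1])        # (B, num_classes)
```

A 3D CNN (`nn.Conv3d`) over the stacked frames is the other standard option; the CNN+GRU split is usually easier to train on small datasets.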