I’m working on a weakly-supervised learning project where I have bags of images and one or more labels for each bag (but not labels for each image).
I have developed a PyTorch dataloader that loads several bags of images into a single 4D tensor, X, with shape (total # images, C, H, W). The dataloader also returns a 1D tensor, bagids, of length total # images, that holds an ID number for each image's bag.
My PyTorch model expects a single input, so I call it like this: model(input), where input is the tuple (X, bagids).
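To make the setup concrete, here is a stripped-down sketch of the kind of dataset and collate function I'm describing (all names like `BagDataset` and `bag_collate` are just illustrative, not from any library):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class BagDataset(Dataset):
    """Each item is one variable-length bag of images plus a bag-level label."""
    def __init__(self, bags, labels):
        self.bags = bags      # list of tensors, each (n_i, C, H, W)
        self.labels = labels  # one label per bag
    def __len__(self):
        return len(self.bags)
    def __getitem__(self, i):
        return self.bags[i], self.labels[i]

def bag_collate(batch):
    """Concatenate several bags into one 4D tensor X and build bagids."""
    imgs, labels = zip(*batch)
    X = torch.cat(imgs, dim=0)  # (total # images, C, H, W)
    bagids = torch.cat([torch.full((b.shape[0],), i, dtype=torch.long)
                        for i, b in enumerate(imgs)])
    y = torch.stack([torch.as_tensor(l) for l in labels])
    return (X, bagids), y

# Three bags of 2, 5, and 3 images respectively
bags = [torch.randn(n, 3, 8, 8) for n in (2, 5, 3)]
dl = DataLoader(BagDataset(bags, [0, 1, 0]), batch_size=3,
                collate_fn=bag_collate)
(X, bagids), y = next(iter(dl))
# X has shape (10, 3, 8, 8); bagids is [0, 0, 1, 1, 1, 1, 1, 2, 2, 2]
```

The model then unpacks the tuple itself, e.g. `X, bagids = input` at the top of `forward`.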
How can I get fastai to pass both X and bagids to the model? Since I'm not using the DataBlock API, I don't know how to specify the number of inputs. I've looked at the Siamese and Image Sequences tutorials, but in both of those the sequences (bags) have a fixed length, so an auxiliary tensor like bagids is unnecessary.
Any help or advice would be most appreciated.