I have images with dimensions of 10000x64. I would like to train with these images using convnet transfer learning with, say, resnet50.
I cannot pad to 10000x10000 for reasons of speed and memory. I also cannot crop the larger dimension, as the information loss would be too severe.
Is it possible to do transfer learning directly with these non-standard images?
What are the images? Are, for instance, 100x64 slices of the image meaningful if labelled?
Unfortunately not. They can be thought of as weird, big photographs of, say, dog breeds that cannot be squared simply due to memory and CPU constraints, and that do not make much sense if sliced.
Perhaps I could slice them anyway and stack the 100 slices as channels of the input tensor? That might work, but the transfer learning might be challenging, as pretrained models generally expect three-channel input.