Can you train VGG on 1D arrays

Hello,

From what I’ve gathered, image inputs to VGG are converted to numpy arrays for training/prediction. I’m currently working on a project where I have data stored as 1D numpy arrays (I can make them 2D as (1, X) arrays) that were not taken from images. Is it possible to use VGG to train/predict data of this kind? If so, I assume I must bypass the requirement of the input data being in an image format (jpg, png, etc.).

Any advice about how to go about doing this would be highly appreciated. Thanks in advance.

You can’t use VGG directly, since its 2D convolutions (and 3-channel, image-shaped inputs) don’t match 1D data, but you can train a 1D CNN.
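The main thing to get right is the input shape: PyTorch's `nn.Conv1d` expects `(batch, channels, length)`, so 1D arrays just need a channel axis added. A minimal sketch with made-up data (the array sizes here are arbitrary, for illustration only):

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical 1D signals: 32 samples, each a length-400 numpy array.
arr = np.random.rand(32, 400).astype(np.float32)

# Conv1d expects (batch, channels, length), so add a channel axis.
x = torch.from_numpy(arr).unsqueeze(1)   # shape: (32, 1, 400)

conv = nn.Conv1d(in_channels=1, out_channels=20, kernel_size=9, stride=2)
out = conv(x)
print(out.shape)  # torch.Size([32, 20, 196]); length = (400 - 9) // 2 + 1
```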


Can you elaborate on this?
I am training a CNN for regression on arrays of size 1×400, currently using:

md = ImageClassifierData.from_arrays(PATH, (trn, y_trn),
                                     (val, y_val),
                                     bs=64,
                                     continuous=True)
learner = ConvLearner.from_model_data(net, md)

where net is:

net = nn.Sequential(
    nn.Conv1d(1, 20, kernel_size=9, stride=2, padding=0),    # (N, 1, 400) -> (N, 20, 196)
    nn.LeakyReLU(),
    nn.BatchNorm1d(20),
    nn.Conv1d(20, 40, kernel_size=9, stride=2, padding=0),   # -> (N, 40, 94)
    nn.LeakyReLU(),
    nn.BatchNorm1d(40),
    nn.Conv1d(40, 80, kernel_size=9, stride=2, padding=0),   # -> (N, 80, 43)
    nn.LeakyReLU(),
    nn.BatchNorm1d(80),
    nn.Conv1d(80, 160, kernel_size=9, stride=2, padding=0),  # -> (N, 160, 18)
    nn.LeakyReLU(),
    nn.AdaptiveMaxPool1d(10),                                # -> (N, 160, 10)
    Flatten(),                                               # fastai's Flatten -> (N, 1600)
    nn.Linear(10*160, 1)                                     # single regression output
).cuda()
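As a standalone sanity check of the architecture on a dummy batch (here `Flatten` is redefined with the same behaviour as fastai's module, and the `.cuda()` call is dropped so it runs on CPU):

```python
import torch
import torch.nn as nn

class Flatten(nn.Module):
    """Same behaviour as fastai's Flatten: collapse all but the batch dim."""
    def forward(self, x):
        return x.view(x.size(0), -1)

net = nn.Sequential(
    nn.Conv1d(1, 20, kernel_size=9, stride=2, padding=0),
    nn.LeakyReLU(),
    nn.BatchNorm1d(20),
    nn.Conv1d(20, 40, kernel_size=9, stride=2, padding=0),
    nn.LeakyReLU(),
    nn.BatchNorm1d(40),
    nn.Conv1d(40, 80, kernel_size=9, stride=2, padding=0),
    nn.LeakyReLU(),
    nn.BatchNorm1d(80),
    nn.Conv1d(80, 160, kernel_size=9, stride=2, padding=0),
    nn.LeakyReLU(),
    nn.AdaptiveMaxPool1d(10),
    Flatten(),
    nn.Linear(10 * 160, 1),
)

x = torch.rand(2, 1, 400)   # batch of two length-400 signals
print(net(x).shape)         # torch.Size([2, 1])
```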

I found that learner.summary() does not work for single-channel “images”; I think the problem comes from:

def summary(self):
    x = [torch.ones(3, self.data.trn_ds.cats.shape[1]).long(),
         torch.rand(3, self.data.trn_ds.conts.shape[1])]
    return model_summary(self.model, x)

It expects 3-channel images. I don’t know whether my approach of using the image data loader is a good idea.
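In the meantime I can get the same shape information with plain PyTorch forward hooks (a generic sketch, not part of the fastai API; the two-layer `model` below is just a stand-in):

```python
import torch
import torch.nn as nn

def shape_summary(model, x):
    """Print each top-level module's output shape for a sample input."""
    hooks = []
    def hook(module, inp, out):
        print(f"{module.__class__.__name__:18s} -> {tuple(out.shape)}")
    for m in model.children():
        hooks.append(m.register_forward_hook(hook))
    with torch.no_grad():
        model(x)
    for h in hooks:
        h.remove()

# e.g. for a single-channel 1D model:
model = nn.Sequential(nn.Conv1d(1, 20, kernel_size=9, stride=2), nn.ReLU())
shape_summary(model, torch.rand(2, 1, 400))
```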
Any insights on how we can deal with 1D vectors to do regression using fastai?
Sincerely,
Thomas