I’m seeing some strange behavior with the output shape from predict_generator on my local machine. I’m working on the fisheries competition, using one of our standard approaches: take the convolutional layers from the pre-trained VGG model, then feed their outputs into a new set of Dense layers to train.
Just for simplicity, my sample training data has 16 total samples across 8 classes (2 samples per class).
Here is my code:
train_dir = DATA_DIR + '\\sample\\train'
batch_size = 16
batches = get_batches(train_dir, batch_size=batch_size, shuffle=False)
last_conv_idx = [idx for idx, layer in enumerate(model.layers) if type(layer) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
trn_features = conv_model.predict_generator(batches, batches.samples, verbose=1)
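For reference, here is a quick sanity check on the last two dimensions I was expecting (a sketch assuming 224x224 inputs and VGG's four max-pools before the final conv block; the variable names are just for illustration):

```python
# Sanity check: with 224x224 inputs, four 2x2 max-pools before VGG's
# last convolutional block leave a 14x14 feature map, so each sample's
# features should be (512, 14, 14) in channels-first ordering.
input_size = 224
num_pools = 4                      # max-pool layers before the last conv block
spatial = input_size // (2 ** num_pools)
feature_shape = (512, spatial, spatial)
print(feature_shape)  # (512, 14, 14)
```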
Here is what I get when I run trn_features.shape:
(256L, 512L, 14L, 14L)
I would have expected this to be: (16L, 512L, 14L, 14L)
It looks like predict_generator is producing total samples * batch size predictions?
This has been driving me crazy - what am I doing wrong? I used the same code for the State Farm competition and didn't see this issue there.