First layer of model has wrong number of dimensions?

Hello there,

First off, love the course!

I’ve been having trouble with running .fit or .predict on my model for the State Farm competition. I’m just using the simplest model from the lesson to demonstrate the error, but when I have:

model = Sequential([
    BatchNormalization(axis=1, input_shape=(3,224,224)),
    Flatten(),
    Dense(10, activation='softmax')
])

I am able to successfully run:

model.compile(Adam(lr=0.0001), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=1, validation_data=val_batches,
                    nb_val_samples=val_batches.nb_sample)
model_train_feat = model.predict_generator(batches, batches.nb_sample)
model_val_feat = model.predict_generator(val_batches, val_batches.nb_sample)

However, when I run:

model.fit(model_train_feat, trn_labels, batch_size=batch_size, nb_epoch=1,
          validation_data=(model_val_feat, val_labels))

I get:

Error when checking model input: expected batchnormalization_input_8 to have 4 dimensions, but got array with shape (1500, 10)

I have run model.summary() and the first layer is this:

Layer (type) Output Shape Param # Connected to

batchnormalization_28 (BatchNorm (None, 3, 224, 224) 896 batchnormalization_input_8[0][0]

I don’t understand why it is expecting 4 dimensions, or how to change this. I have spent several hours reading forums and documentation but can’t seem to figure it out. Any advice would be much appreciated!

Thank you!

expected batchnormalization_input_8 to have 4 dimensions, but got array with shape (1500, 10)

This says it all: model_train_feat is clearly not a batch of images but the model's precomputed softmax output, with shape (1500, 10). You need to either pass in images or change your model's input shape accordingly.
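To make the shape mismatch concrete, here is a minimal sketch using numpy arrays with the shapes from this thread (toy zeros, not real data):

```python
import numpy as np

# What the first layer expects: batches of images with shape
# (samples, channels, height, width) -- 4 dimensions.
images = np.zeros((1500, 3, 224, 224), dtype=np.float32)

# What predict_generator returned: one softmax row per sample,
# shape (samples, classes) -- only 2 dimensions.
model_train_feat = np.zeros((1500, 10), dtype=np.float32)

print(images.ndim)            # 4 -- valid input for this model
print(model_train_feat.ndim)  # 2 -- triggers the "expected 4 dimensions" error
```

Feeding the 2-D feature array back into a model whose input_shape is (3, 224, 224) is exactly what produces the error message above.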

Sean,

Thank you for your reply. I’m still not quite sure I understand.

I pull in my batches like this:

gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
                                 shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)

batches = get_batches(path+'train', gen_t, batch_size=batch_size)
val_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=False)

and model_feat is just model.predict_generator(batches, batches.nb_sample).

This is directly from the State Farm notebooks. I guess I am confused about what I am doing differently/wrong, and about how I am able to run predict_generator and fit_generator and get results, but not able to run predict or fit.

Thanks again for your help. I am not sure how I would pass in an image, or how to change my model to expect something other than an image. I tried passing in the batches directly, and also tried building up the model without any helper methods in case they were dated. I still received the error.

Okay WOW, your comment cued me to look back through Lesson 1's notebooks, and I found that .predict and .fit were being used on the image arrays retrieved with next(batches), not on the batch generators themselves. It also made me realize the difference between .predict and .predict_generator: .predict_generator accepts input from a data generator, while .predict accepts the data arrays directly.
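For anyone else hitting this, here is a minimal sketch of the next(batches) idea with a toy generator standing in for get_batches (the shapes match the course setup, but the data is just zeros):

```python
import numpy as np

# A toy stand-in for the batches generator: each next() call
# yields one (images, labels) tuple of actual numpy arrays.
def toy_batches(batch_size=4):
    while True:
        imgs = np.zeros((batch_size, 3, 224, 224), dtype=np.float32)
        labels = np.zeros((batch_size, 10), dtype=np.float32)
        yield imgs, labels

batches = toy_batches()

# Pull one batch of real arrays out of the generator.
imgs, labels = next(batches)

print(imgs.shape)    # (4, 3, 224, 224) -- 4-D arrays, what .fit/.predict expect
print(labels.shape)  # (4, 10)
```

So .fit_generator/.predict_generator take the generator itself, while .fit/.predict take arrays like imgs and labels above.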

I spent way too much time figuring this out lol. Feel pretty dumb.