Lesson 1 discussion

I AM ALSO GETTING THE SAME ERROR… BELOW IS MY KERAS.JSON FILE

{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}

I am on Python 2.7, Keras 2.0.2, and Theano 0.9.0.

ValueError: The shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.

HOW Can I Solve this…

@sahilk1610 Keep looking carefully in this thread, Jeremy provides a link to a script written by a class member.

@akshaylamba: welcome to 80% of learning Keras. Meaning, figuring out dimensions.

From the documentation, there’s an example:

from keras.models import Sequential
from keras.layers import Dense

# as first layer in a sequential model:
model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# now the model will take as input arrays of shape (*, 16)
# and output arrays of shape (*, 32)

The important part is input_shape=(16,). Keras needs to know the dimensions of the data you are going to feed it. How else could it build an appropriately sized weight matrix for you to fit?

If you get the error the shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model., then it's fairly clear that you haven't defined the input shape properly in some way or another.
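For example, here's a minimal sketch (the 3 x 224 x 224 shape and layer sizes are just made-up placeholders, assuming Theano-style channels-first images) showing how a complete input_shape on the first layer lets Flatten work out its output size:

from keras.models import Sequential
from keras.layers import Flatten, Dense

# With an explicit input_shape on the first layer (channels first: 3 x 224 x 224),
# Flatten can infer a fully defined output dimension of 3*224*224.
model = Sequential()
model.add(Flatten(input_shape=(3, 224, 224)))
model.add(Dense(2, activation='softmax'))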

Hope this helps.

PS all caps in posts is usually interpreted as yelling on forums. Your problem is not that bad…so there’s no need to shout.


Hi,

I am getting this error. Any ideas? Thanks a lot.

Hello,

I have the identical scenario. Were you able to resolve this, and if so, how?

Thanks in advance for your thoughts.

Best regards,
Bob

It appears that you’re using Keras 2.0 whereas the notebook assumes Keras 1. You could reinstall the previous version of Keras, or you could make the necessary modifications to the notebook by referring to this post:
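(If you want to go the downgrade route, a command along these lines should work, assuming Keras was installed with pip; 1.2.2 is the last Keras 1.x release:)

pip install keras==1.2.2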


It works for me. Thanks a lot, Zarak.
This python-howard jungle is spooky and amazing to me.


As a reference, would training models be faster on the p2 instance?

For me on the P2 instance, training the model took 11 mins (663 seconds), Wooow!

23000/23000 [==============================] - 663s - loss: 0.2209 - acc: 0.9716 - val_loss: 0.1513 - val_acc: 0.9840

I am confused about how to handle the probs result returned by the vgg.test method.

We have val_batches, probs = vgg.test(valid_path, batch_size=batch_size), which returns val_batches and probs.

But when we want to turn probs into an array of guessed categories, we take only the first column by doing

our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)

My question is:
If each column of the probs var is the probability of belonging to that particular class, why aren't we just doing this:

our_predictions = probs[:,1]
our_labels = np.round(our_predictions)
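(As a sanity check on the arithmetic: for a two-class softmax the two columns sum to one, so np.round(1 - probs[:,0]) and np.round(probs[:,1]) give the same labels. A tiny numpy sketch with made-up numbers:)

import numpy as np

# made-up two-column softmax output; each row sums to 1
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])

labels_a = np.round(1 - probs[:, 0])   # using the first column
labels_b = np.round(probs[:, 1])       # using the second column
print(np.array_equal(labels_a, labels_b))  # True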

I tried to follow Lesson 1 and practice with the notebook. I suggest updating the code in vgg16.py, utils.py and the notebooks to Keras 2.x and the TensorFlow backend, in order to maintain the simplicity of the course.

There are a few details in the parameters of the methods and in the channel ordering of the images (first or last) that are not handled in the current Theano version, so they add difficulty to following the course.

I have updated the vgg16.py code to work on Keras 2.x, on both Theano and TensorFlow, with the channels_first option, but it fails when the channels_last option is used.
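For anyone on Keras 2.x hitting the channel-ordering issue: the old "th"/"tf" image_dim_ordering setting has become image_data_format in Keras 2. A keras.json along these lines (assuming the Theano backend) selects channels_first:

{
    "image_data_format": "channels_first",
    "epsilon": 1e-07,
    "floatx": "float32",
    "backend": "theano"
}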

Anyway, I think the course would now be much easier to follow if the code were updated.

Regards.


I solved this issue with the following two lines:

from keras import backend as K
K.set_image_dim_ordering('th')

Hi - I am not reaching the end of my batches. It hangs before the last one. I've tried different batch sizes. Can you help me? Thank you very much.

Hi everyone, I have done vgg.test on the test data and was able to extract the IDs and predictions. However, I noticed that batch.filenames gives the file names in a seemingly random order (different from what is in the data directory). I am not sure if it would be right to assume that the prediction array is in the same sequence as the filenames returned by batch.filenames, or is there a parameter in vgg.test to play around with?
Any clue to the above query would be helpful. Thanks in advance 🙂

This was answered in the lesson 2 video. Cheers 🙂

I'm having some trouble with the vim .bashrc step for setting the alias by default…

I can see that I have saved the source command into .bashrc all right,
but alias returns nothing; only after I type $ bash does 'alias' return the commands…

Please help. Or if you could, point me to where I could look. Thank you!

Mac OS Sierra 10.12.4, using Terminal

Isn't it rather awkward to observe that the run time for fitting the Vgg16 model on the redux dataset is only 221 seconds, while the prediction runtime (the call to vgg.test) takes over 30 minutes!

I’m running on my local GPU: GTX 1060 6GB
Other Specs:

CPU: i5-6600K
OS: Windows 10

My call to Vgg.test

#batch_size = 64
batches, preds = vgg.test(path+'test', batch_size = batch_size)

I’m running Anaconda 3.6 with Keras 2.0 API

I’ve tried searching in the forums about anything related to the runtime of predicting, but i only found topics about the runtime of the fitting/training.

I'm curious to know, is this normal? And in general, does the prediction time scale with the training time?

Thanks for the awesome content!

hi!! thanks – this helped!

For the life of me I can't find the label (id) anywhere. The order of the results keeps changing, so I can't go on that either…!!

Any help – helps!

thx – jon

I found what was causing the enormous prediction time: I forgot to change

self.model.predict_generator(test_batches, test_batches.samples)

to

self.model.predict_generator(test_batches, math.ceil(test_batches.samples/test_batches.batch_size))

It's really important to do math.ceil. Otherwise, integer division will cause the prediction to skip a few images in the last batch.
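For reference, with Keras 2's predict_generator(generator, steps) signature the corrected call could look something like this (test_batches being whatever the test generator is called in your copy of vgg16.py):

import math

# number of batches needed to cover every test image,
# including the final partial batch
steps = int(math.ceil(test_batches.samples / float(test_batches.batch_size)))
preds = self.model.predict_generator(test_batches, steps)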

When I try to unzip the dataset "dogscats.zip" downloaded from http://files.fast.ai/data/ I get the following error message:

End-of-central-directory signature not found. Either this file is not a zipfile, or it constitutes one disk of a multi-part archive. In the latter case the central directory and zipfile comment will be found on the last disk(s) of this archive. unzip: cannot find zipfile directory in one of dogscats.zip or dogscats.zip.zip, and cannot find dogscats.zip.ZIP, period.

It appears to be corrupt. I've installed unzip successfully and I'm in the right directory where the zip file is located. How can I solve this?

Thank you

Something went wrong during the download. I've re-downloaded the data and now it works. 🙂