Keras 2 / Python 3.6 notebooks

In case folks want to use Python 3 for this course, here are my versions of Jeremy’s notebooks (lessons 1-4) that I’m using:

I’d love to be able to use Tensorflow rather than Theano as the backend (so that I’m compatible with the part2 lessons), but could not figure out how to convert the pre-computed coefficients. I decided not to sweat it and stick to Theano for now.

Cheers, Karel


Thanks for posting this. For conversion, this might help:
titu1994 on GitHub
You need to place that file into the same dir as the weights files and edit it. Follow the instructions inside. It is a bit of work, but functional.


Cool! Here is my version of lesson 1.

Good job @kzuiderveld! Thanks for sharing.

Hello, just in case anybody might be interested: here is a link to my Keras 2 / Python 3 notebooks for part 1. I noticed this thread only after posting the link on another one: Keras 2 Released. Any comments and suggestions will be greatly appreciated.


Hey, thanks for this Karel. I used the code

from keras import backend

to make the code work with Tensorflow. However, in both lessons 1 and 2, the code runs but doesn’t reproduce the results in the course. In particular, the training accuracy of the Vgg model in lesson 1 is only 87%, with a validation accuracy of 92%. Is this because, even with the image data format change, the Vgg weights are still interpreted differently by Tensorflow vs Theano?

Thanks a lot!

Sorry for the late reply, was out of town.

Theano and Tensorflow coefficients are different. I couldn’t figure out how to convert them, so I stuck with Theano for part 1.
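For anyone who wants to attempt the conversion anyway: the core difficulty is that Theano stores convolution kernels flipped relative to TensorFlow’s convention, so each kernel has to be rotated 180° in its spatial dimensions (and the dense weights after the Flatten reordered to match the dim ordering). A plain NumPy sketch of the kernel flip, assuming a `(rows, cols, in_channels, out_channels)` layout:

```python
import numpy as np

# Theano and TensorFlow disagree on the convolution convention, so porting
# weights means rotating each conv kernel 180 degrees in the spatial dims.
# Assumed kernel layout: (rows, cols, in_channels, out_channels).
def convert_kernel(kernel):
    return kernel[::-1, ::-1, :, :].copy()

k = np.arange(9, dtype=np.float32).reshape(3, 3, 1, 1)
flipped = convert_kernel(k)

# Flipping twice recovers the original kernel
assert np.array_equal(convert_kernel(flipped), k)
```

Recent Keras 2 releases also ship a `convert_all_kernels_in_model` utility in `keras.utils` that applies this flip across a whole model, which may be a less error-prone starting point than doing it by hand.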

Thanks Robi. This was really helpful.

I just updated my repo which now includes also the “Python 3.5 - Keras 2” adaptation for Part 2 of the course. As indicated in the repo’s README, I am still in the process of completing some tasks. Thanks in advance to everybody who will have a chance to provide any comments, suggestions or corrections.
For any questions or issues related to the repo, I suggest visiting the issues section of the repo directly at

Hi there - Thanks for sharing your work. I’ve been working in Python3 as well. I have a question about your hardware. What are you using to achieve these results? I ask because I’m running into an out of memory error when I try to run the fc_model. I’ve tried to run it with fit_generator but haven’t been able to get it to work yet. I’m sure the majority of the problem is my lack of understanding of how to properly create a generator, but I’m curious as to how you were able to proceed. Thanks!

I’m using an Nvidia GeForce Titan X on an HP Z840 workstation, with 128 GB DDR4 RAM and a XEON E5-2637 v3 6-core processor. My display is connected to a separate graphics card Nvidia Quadro K4200, so the Titan X is entirely dedicated to the DL tasks.
I have not thoroughly tested the fc_model yet; I just tried to run it without analyzing the potential pitfalls. In particular, I was not yet able to run the larger model because it seems it doesn’t fit on the Titan X as-is (I get a memory error). I will update the repo as soon as I solve the issues, hopefully in a few nights …


Thanks for the reply - I am immediately humbled by your hardcore machine! I’m over here with 16GB of ram, a core i3-7100, and a GTX 1060, 6GB. I can get some stuff done, but get bogged down at other times. I guess I just need to learn more of the batching techniques to train and run models. Thanks again, looking forward to seeing what you came up with.
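On the generator question: `fit_generator` only needs something that yields `(inputs, targets)` tuples indefinitely, so the full dataset never has to sit in memory-hungry single batches. A minimal sketch (array shapes and names here are illustrative, not from the course code):

```python
import numpy as np

def batch_generator(X, y, batch_size=32):
    """Yield shuffled (inputs, targets) batches forever,
    the contract Keras's fit_generator expects."""
    n = len(X)
    while True:
        idx = np.random.permutation(n)
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]
            yield X[batch], y[batch]

# Illustrative dummy data: 100 RGB images, 2 classes
X = np.zeros((100, 224, 224, 3), dtype=np.float32)
y = np.zeros((100, 2), dtype=np.float32)

xb, yb = next(batch_generator(X, y, batch_size=32))
assert xb.shape == (32, 224, 224, 3)
assert yb.shape == (32, 2)
```

For the fc_model specifically, precomputing the convolutional features once (e.g. with `predict_generator`) and training only the dense layers on them is usually the bigger memory win.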

I have finally updated the tiramisu-keras notebook in my repo today: in this version I go straight to the larger network in order to visit all the cells to the end, but the performance I get (val_accuracy) for the number of epochs the model ran is lower than I would have expected. Well, having good hardware doesn’t guarantee good results, we all know that! I guess I still have a lot to learn in order to improve it.

EDIT: I figured this one out: it is due to keras.backend.image_dim_ordering still being set to ‘tf’ when it should be ‘th’. I have ‘th’ set in the .keras.json file; I am not sure why it is not being picked up, so I added an explicit set in the notebook.


Thanks, I am also trying to get Python 3 to work on Windows. Have you run into this error when calling the Vgg constructor?

~\AppData\Local\Continuum\Anaconda3\envs\dlwin36\lib\site-packages\keras\layers\ in compute_output_shape(self, input_shape)
    479         raise ValueError('The shape of the input to "Flatten" '
    480                          'is not fully defined '
--> 481                          '(got ' + str(input_shape[1:]) + '. '
    482                          'Make sure to pass a complete "input_shape" '
    483                          'or "batch_input_shape" argument to the first '

ValueError: The shape of the input to "Flatten" is not fully defined (got (0, 7, 512). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model

Any idea?


I got this issue when using the old version of the keras.json file for Theano with the latest version of Keras.
The new version of the template that I uploaded to my repo yesterday solved the issue for me.
In the new version there is a different keyword for specifying the dim ordering. To use Theano with the correct keras.json file, I suggest trying the new template.
Let me know if this helps.
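For reference, the new-style config looks roughly like this (a sketch of the usual `~/.keras/keras.json` layout, not a copy of the repo’s template); the old Keras 1 key was `"image_dim_ordering": "th"`:

```json
{
    "backend": "theano",
    "image_data_format": "channels_first",
    "floatx": "float32",
    "epsilon": 1e-07
}
```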

@Robi I am working on seq2seq with attention, and when I try to use your code for the attention wrapper I get the error “LSTM doesn’t have attribute get_constants”.

I think it is because you are using the latest (or a very recent) Keras version.

Referring to the spelling_bee_RNN.ipynb and modules, I initially tested them with Keras 2.0.6 and they worked fine.

In later Keras releases the implementation in the module is different and, among other changes, get_constants has disappeared.

I recently tested the modules with Keras 2.1.2 and updated my repo, but could not devote time to solve this issue.

If anyone wants to contribute a working version of the “seq2seq with attention” modules, that would be great!

Dear @Robi, thanks for these notebooks!!

I followed the instructions on this thread and could run your notebooks with no problem.

But in lesson 1 I’m seeing some warnings and, finally, wrong predictions that I would like to show you, so you can tell me what could be wrong.

First, in MODEL CREATION when I run:

model = VGG_16()

I get the following warning:

/home/edu/anaconda2/envs/deep_learning_1/lib/python3.6/site-packages/keras/layers/ UserWarning: `output_shape` argument not specified for layer lambda_1 and cannot be automatically inferred with the Theano backend. Defaulting to output shape `(None, 3, 224, 224)` (same as input shape). If the expected output shape is different, specify it via the `output_shape` argument.
  .format(, input_shape))

Then in the In [19] I run this code:

batches = get_batches('train', batch_size=batch_size)
val_batches = get_batches('valid', batch_size=batch_size)
imgs,labels = next(batches)

# This shows the 'ground truth'
plots(imgs, titles=labels)

and I get this warning:

/home/edu/anaconda2/envs/deep_learning_1/lib/python3.6/site-packages/matplotlib/ FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
  if s != self._text:

It will still show pictures of Dogs / Cats with 0/1 as labels.

Finally, the code for predictions


will produce the following results for two images (a DOG and a CAT):

Shape: (2, 1000)
First 5 classes: ['tench', 'goldfish', 'great_white_shark', 'tiger_shark', 'hammerhead']
First 5 probabilities: [0.0008 0.0004 0.0013 0.0005 0.0016]

Predictions prob/class: 

I can’t understand why it’s not working.

Thank you very much in advance!


Hello Edu,
I would check whether your ~/.keras/keras.json file is properly set for Theano. There is an example in my repo, and you can also find pointers in this thread.
(Unfortunately I do not have a proper environment to investigate your issue anymore.)

Hope it helps!