Can I use Python 3 for this course?

I’m using python 3.5 (Spyder, Anaconda), theano 0.9.0, and keras 1.2.2.
I’m getting an error when trying to access the attributes ‘N’ and ‘nb_sample’ of the batches — specifically, when I run the following command in mnist.py:

lm.fit_generator( batches, samples_per_epoch = batches.N, nb_epoch=1,
                  validation_data = test_batches, nb_val_samples = test_batches.N )

The error is:

lm.fit_generator( batches, samples_per_epoch = batches.N, nb_epoch=1,
                  validation_data = test_batches, nb_val_samples = test_batches.N )
Traceback (most recent call last):
  File "<ipython-input-315-9849b28d8dc6>", line 1, in <module>
    lm.fit_generator( batches, samples_per_epoch = batches.N, nb_epoch=1,
AttributeError: 'NumpyArrayIterator' object has no attribute 'N'

Any idea what might be the problem?
Also, where can I find all the attributes available for batches?

It is now lowercase n.

In your notebook, try typing batches. then hitting Tab — that will list all the attributes available on the iterator.
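To answer the second question programmatically, dir() shows the same list that dot-Tab completion does. A minimal sketch — FakeIterator is a hypothetical stand-in, since the real object comes from Keras's ImageDataGenerator.flow() / flow_from_directory():

```python
# Hypothetical stand-in for a Keras 1.2 batch iterator, just to
# illustrate introspection; real code would inspect the object
# returned by ImageDataGenerator.flow_from_directory().
class FakeIterator:
    def __init__(self):
        self.n = 23000          # renamed from 'N' in Keras 1.2
        self.batch_size = 64
        self.batch_index = 0

batches = FakeIterator()

# dir() lists everything tab-completion would show in a notebook:
attrs = [a for a in dir(batches) if not a.startswith('_')]
print(attrs)
```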


It works. Just like that :slight_smile: Thanks!!

Yup, I am running part 1 with the latest and greatest Anaconda 3 distribution (Python 3.6, cudnn 8, etc) and Keras 2.

You have to make some changes to the Python code, but it’s fairly painless. Most of the headaches are in Keras 2, where they changed the API without worrying too much about backward compatibility. Instead of the number of samples, various functions now require the number of batch steps; other than that, it's pretty straightforward.
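To make the samples-to-steps change concrete, here's a sketch of the mapping for the fit_generator call from the course, with the Keras 1 arguments shown in comments. The counts (23000 train, 2000 validation, batch size 64) are the course's dataset sizes:

```python
import math

n_train, n_val, batch_size = 23000, 2000, 64

# Keras 1 took raw sample counts:
#   model.fit_generator(batches, samples_per_epoch=n_train, nb_epoch=1,
#                       validation_data=test_batches, nb_val_samples=n_val)

# Keras 2 takes batch counts ("steps") instead:
steps_per_epoch = math.ceil(n_train / batch_size)
validation_steps = math.ceil(n_val / batch_size)
print(steps_per_epoch, validation_steps)  # 360 32

#   model.fit_generator(batches, steps_per_epoch=steps_per_epoch, epochs=1,
#                       validation_data=test_batches,
#                       validation_steps=validation_steps)
```

Note that nb_epoch also became epochs in Keras 2.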


Thanks, @kzuiderveld. I have started the conversion: https://github.com/chanansh/course.fast.ai-pyhon-3-keras-2 . If you're interested in contributing, let me know.

Wondering how you got Python 3/Keras 2 to work? I've been fiddling with this for a while and did some pretty extensive testing. I basically rewrote the convnet from parts 1-3 line by line, but no matter what I do, trying to optimize the last layer gives me ETAs of 50,000 seconds. I've even tested it on a different system with a similar GPU. All the Python 2/Keras 1 versions take 700 seconds to optimize; Python 3/Keras 2 seems to be broken somehow, taking huge amounts of time. I can only assume it is not using the GPU (despite Theano telling me it sees the GPU when I import Keras). I've even fiddled with switching to a TensorFlow backend, without much luck.

I made a repo with configs; my code is in vgg_yvan.

I quickly looked at your code; this is incorrect:
model.fit_generator(tr_batches,
                    steps_per_epoch=28336,
                    validation_data=va_batches,
                    validation_steps=2000,
                    epochs=1)

With 28336 training images and a batch size of 64, steps_per_epoch should be ceil(28336/64) = 443. Likewise, validation_steps = ceil(2000/64) = 32.
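A quick check of those numbers (a sketch, assuming the counts above; in real code you'd use the iterator's own n and batch_size attributes rather than hard-coded constants):

```python
import math

n_train, n_val, batch_size = 28336, 2000, 64

# Keras 2 wants the number of batches per epoch, not the number of samples:
steps_per_epoch = math.ceil(n_train / batch_size)    # 28336/64 = 442.75 -> 443
validation_steps = math.ceil(n_val / batch_size)     # 2000/64  = 31.25  -> 32
print(steps_per_epoch, validation_steps)  # 443 32
```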

Try it out, let us know whether that solves the problem.

Thanks for responding so fast. I switched it to batches.n // batch_size, which lowers the training time to the expected ~700 seconds. I had set it to 28336 because in Python 2/Keras 1 Jeremy's epochs showed 23K per epoch (the total number of training images). I went to the docs and read that it should be # examples / batch size (which makes sense). I guess my next question is why Jeremy's fit operations have epochs that look like this:
23000/23000 [==============================] - 272s - loss: 0.4665 - acc: 0.9703 - val_loss: 0.4522 - val_acc: 0.9710

It would be really nice to understand.

Edit: does Keras 1 show the number of training examples every step and Keras 2 show the number of steps? Then in Keras 1 you would see all 23K (360 * 64) examples every epoch, while in Keras 2 you just see each step (360), and you know you have iterated over all 23K examples (360 * 64) if the batch size is 64. Am I moving towards something correct here?

Edit 2: would this also explain why I was always ending up at ~40K seconds of train time? I guess that when you give Keras more batches than you have (say 23K), it just starts cycling back through them? That would explain why I had 700 seconds * 64 = 44.8K seconds of optimize time; it is as if I was training for roughly 64 epochs.

The API of Keras 2 is simply different - it reports the number of batches (i.e. “steps”) that have been processed per epoch. If one is aware of this change, the transition to Keras 2 is smooth.

Since you're using fit_generator, try specifying workers=4 in that call and see whether multithreaded data loading boosts performance.


I'll give the multithreading a go. Thanks for your help.

Top tip: according to the Keras documentation (as of June 2017):

Keras is compatible with: Python 2.7-3.5.

i.e. if you accidentally start with Python 3.6 (like I did), Keras might (will) break at some point in vgg16.py. I've just tried Python 3.5 and it seems to be working OK. (Top tip 2: experiment with different conda environments.)

Just an FYI, Keras now claims compatibility with Python 3.6.

Times, they are a changing…
