Python and keras questions and tips

@jeremy - true :frowning: It is actually part of a project task in one of my courses at University.
The task is to take a system that works well on the known (field1, field2) ratings, train the model, and then test it on the unknown ratings.
Alternating optimization needs to be applied to the crosstab, using latent factor models.
Later, to evaluate the model, the RMSE (root mean square error) will be computed on the test set.

You don’t need a crosstab to build a latent factor model. Take a look at our lesson 4 notebook - the only time we create a crosstab is to show a demo in Excel; the actual keras model uses the raw ratings table.
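In case it helps, here is a minimal sketch of that idea on a toy raw ratings table. This is not the notebook code - the arrays, sizes and factor count are made up for illustration, and it assumes the Keras 1.x functional API used in the course:

import numpy as np
from keras.layers import Input, Embedding, Flatten, merge
from keras.models import Model
from keras.optimizers import Adam

# toy raw ratings table: one row per (user, item, rating) - no crosstab needed
users   = np.array([0, 0, 1, 2, 2])
items   = np.array([1, 3, 0, 1, 2])
ratings = np.array([4.0, 3.0, 5.0, 2.0, 4.5])
n_users, n_items, n_factors = 3, 4, 5

user_in = Input(shape=(1,), dtype='int64')
item_in = Input(shape=(1,), dtype='int64')
u = Flatten()(Embedding(n_users, n_factors, input_length=1)(user_in))
v = Flatten()(Embedding(n_items, n_factors, input_length=1)(item_in))
pred = merge([u, v], mode='dot')               # predicted rating = dot(user factors, item factors)

model = Model(input=[user_in, item_in], output=pred)
model.compile(Adam(0.001), loss='mse')          # minimising MSE also minimises RMSE
model.fit([users, items], ratings, batch_size=2, nb_epoch=5)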

Did anyone have problems saving their model with Keras? I’m getting an error where the save function can’t seem to grab the learning rate, despite my being very explicit about setting it.

I can save the model architecture as JSON and the weights separately, but I’d like to keep the model’s optimizer state so I can shut down the server and start again. Even if the function isn’t getting the learning rate I set, there is a default value, so I’m wondering why get_config() is raising an exception. Thanks for any help!

Great question. The officially correct way to set the learning rate is:

model.optimizer.lr.set_value(0.0001)

whereas in my code I’ve tended to use:

model.optimizer.lr = 0.0001

If you use my approach, you can’t save the model using save(), although you can still save_weights(). Since I only use the latter, I’d never noticed this problem before. Sorry about that! You might want to stick to using set_value() when setting the learning rate :slight_smile:
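For completeness, a minimal sketch of the save/restore round trip with the set_value() approach (this assumes a Keras version where model.save() and load_model are available, and the filename is just a placeholder):

from keras.models import load_model

# keep the optimizer's lr as a backend variable, so save() can serialise it
model.optimizer.lr.set_value(0.0001)

model.save('checkpoint.h5')          # architecture + weights + optimizer state

# ...after restarting the server, pick up exactly where you left off
model = load_model('checkpoint.h5')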

1 Like

Ah! Thanks for the insight, @jeremy! I thought there was a bug in Keras or I was going a little nuts.

This issue has already sent me a little nuts - see Different training accuracy using model.optimizer.lr: .set_value vs. = :wink:

I did read that forum topic as part of troubleshooting; I thought the final verdict was that how one sets the learning rate was a matter of preference. But now I guess we know a little better. :slight_smile: Thanks for your patience!

Hmm, I thought I followed the pattern in the Keras docs and in the forum topic that discussed how to set learning rates.

model_2.optimizer returns an "Adam" object (which has no attribute set_value)
model_2.optimizer.lr returns a float object

So I'm a little confused about how set_value() works here, and how I can get to the point of saving my working model.

My guess is that at some point earlier in the session you used lr= rather than lr.set_value(). Once you do that, set_value won’t work again.
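In other words (a small sketch of the failure mode, not from the notebooks):

model.optimizer.lr.set_value(0.001)   # works: lr is still a backend shared variable

model.optimizer.lr = 0.001            # plain assignment replaces it with a Python float

model.optimizer.lr.set_value(0.0001)  # AttributeError: 'float' object has no attribute 'set_value'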

1 Like

I did indeed. I’ll rewrite those sections and try again. Thanks so much!

@jeremy I was going through your statefarm-sample notebook (after I tried my own…) and have several questions.

  1. It gave me an error when I did
    "model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])"
    telling me that Adam is not defined. I went to the Keras website and corrected it to
    "model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])"
    and it was fine. I am simply wondering why your command didn’t work for me, in case there is something interesting going on.

  2. What is the difference between model.fit and model.fit_generator (the latter was used in your statefarm notebook)? I didn’t see that Keras has the function fit_generator, so was fit_generator defined in utils, which we imported at the very beginning?

  3. I would also like to know where the function get_batches comes from. Is it from utils as well?

  4. Why is it that for the validation batches, the batch_size is twice as large?

Thank you and @rachel for all the good work. Merry Christmas!

Adam is imported by utils.py, so you’ll need to have imported that.
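If you’d rather not rely on utils.py for this, the explicit import is just the following (I haven’t double-checked the exact line utils.py uses, but it pulls Adam in from keras.optimizers somewhere):

from keras.optimizers import Adam

model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])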

See the Keras docs for details. In short, fit_generator() takes a generator (e.g. from get_batches()) as input, whereas fit() takes an array. fit_generator() is what we use for data augmentation, since the augmented batches are produced by a generator.

Yes - try searching for ‘get_batches’ in utils.py to see this.

Because validation doesn’t require backpropagation, so it can generally handle larger batch sizes in the same amount of memory.
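Putting those together, a minimal sketch of the usual pattern (this assumes the Keras 1.x API and get_batches() from utils.py; the data paths are placeholders, and in Keras 2 the arguments become steps_per_epoch/validation_steps instead):

from utils import get_batches

# generators that yield batches straight from the directories on disk
batches     = get_batches('data/train', batch_size=64)
val_batches = get_batches('data/valid', batch_size=128, shuffle=False)  # 2x batch size is fine for validation

model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
                    validation_data=val_batches, nb_val_samples=val_batches.nb_sample)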

1 Like

I’m trying to understand the Vgg16 model from lesson 1. In order to do so I edited the given code a little.

The original script plots the cats and dogs images with the class labels by using ‘plots’ from ‘utils’.

I tried to port that code snippet to matplotlib, but the image looks solarized. For example, if I have two images loaded in a numpy array, it has dimensions (2, 3, 300, 300).

Now, if I want to output an image with matplotlib using imshow, I have to reshape the array first, or else I get an error that the dimensions are wrong.

plt.imshow(np.rollaxis(imgs[0], 0, 3))
plt.show()

The problem with the code above is that the image looks solarized. How can I fix this?

@qwerty We have converted from RGB to BGR (see the vgg_preprocess function inside vgg.py), because the pretrained weights we are using are from a network that had BGR as the input format.

The plot method defined in utils.py switches it back to RGB.
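For displaying one of those channels-first arrays directly with matplotlib, something along these lines should work - a sketch only, where imgs is the array from the post above, assuming it holds 0-255 pixel values and that the channels may have been reversed to BGR:

import numpy as np
import matplotlib.pyplot as plt

img = imgs[0]                     # shape (3, 300, 300), channels first

img = np.rollaxis(img, 0, 3)      # -> (300, 300, 3), channels last
img = img[:, :, ::-1]             # BGR -> RGB, if the channels were reversed
plt.imshow(img.astype(np.uint8))  # uint8 so imshow treats values as 0-255
plt.show()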

My favourite Python library for plotting is Plotly.

It allows you to create great interactive plots in various formats, extremely quickly and easily. They also offer cloud options for easier sharing, and I find the IPython notebook integration brilliant.

ims = np.asarray(…)
if (ims.shape[-1] != 3):
    ims = ims.transpose((0,2,3,1))

Hi Everyone,

Did anyone run into this error with initializations?

ImportError: cannot import name 'initializations'

 ---> 21 from keras import initializations
     22 from keras.applications.resnet50 import ResNet50, decode_predictions, conv_block, identity_block
     23 from keras.applications.vgg16 import VGG16

I can’t find initializations in the Keras documentation either. I see initializers but not initializations.
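My guess (not verified against the exact versions involved here) is that this is the Keras 1 to Keras 2 rename of that module; a compatibility import along these lines keeps older course code working:

try:
    from keras import initializations                   # Keras 1.x module name
except ImportError:
    from keras import initializers as initializations   # renamed in Keras 2.x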

Is it possible to copy models or layers in Keras? For example, in lesson 3, instead of defining a completely new model with the batchnorm layers included, it would be nice if we could just do:

bn = []
for layer in model.layers:
    if type(layer) is Dropout:
        bn.append(Dropout(.5))
        bn.append(BatchNormalization())
    else:
        bn.append(layer)
model = Sequential(bn)

If I do this, then model.summary() shows extra connected_to entries. For example:

convolution2d_209 (Convolution2D (None, 64, 224, 224)  1792        zeropadding2d_209[0][0]          
                                                                   zeropadding2d_209[1][0]  

I can fit the model like this, but what are the practical implications? Is it fitting what I want, or adding in extra nodes?

You can use copy_layer, copy_layers, copy_weights, and copy_model from utils.py. See the source to see how they work.
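For example, a sketch of the intended usage - assuming copy_model() builds fresh layer instances from the layer configs and copies the weights across, so nothing is shared with the original graph:

from utils import copy_model

bn_model = copy_model(model)   # independent copy: new layer objects, same weights
bn_model.summary()             # should not show extra connected_to entries from the old model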

Jeremy,

Does np_utils.to_categorical give the same result as one-hot encoding?

I implemented DenseNet for a small data set that I have. In the reference implementation they used np_utils.to_categorical on the CIFAR10 dataset to convert the labels to binary class matrices. I felt that with our one-hot encoding mechanism we achieved the same thing, but I would like to get an expert opinion.

It’s hard to tell, since both return a 2-dimensional array of 1s and 0s.

Thanks
Garima
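For what it’s worth, a quick way to convince yourself is to compare the two outputs directly. A sketch, assuming integer class labels and an older scikit-learn where the argument is spelled sparse=False:

import numpy as np
from keras.utils.np_utils import to_categorical
from sklearn.preprocessing import OneHotEncoder

labels = np.array([0, 2, 1, 2])

keras_onehot = to_categorical(labels, 3)                                   # class vector -> binary class matrix
sk_onehot    = OneHotEncoder(sparse=False).fit_transform(labels.reshape(-1, 1))

print(np.allclose(keras_onehot, sk_onehot))   # True: same one-hot encoding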