[SOLVED] Can't reproduce StateFarm Sample notebook results

This is a strange occurrence I ran into on the StateFarm sample.
Since the learning rate starts off low and is raised after a couple of epochs, I decided to use Keras’ LearningRateScheduler to automate it as follows:

import numpy as np
from keras.callbacks import LearningRateScheduler

def fit_model(model, epochs=1, lr=1e-3):

    callbacks = []
    if type(lr) is list:
        # Expand the schedule to one lr per epoch, e.g. epochs=[2, 4],
        # lr=[1e-5, 1e-3] becomes [1e-5, 1e-5, 1e-3, 1e-3, 1e-3, 1e-3]
        lr_schedule = np.hstack([np.repeat(lr[i], epochs[i])
                                 for i in range(len(lr))])

        callbacks.append(LearningRateScheduler(lambda epoch: lr_schedule[epoch]))
    else:
        # Single learning rate: set it directly on the optimizer
        model.optimizer.lr = lr

    return model.fit_generator(train_batches, train_batches.num_batches,
                               epochs=np.sum(epochs), verbose=0,
                               callbacks=callbacks, validation_data=valid_batches,
                               validation_steps=valid_batches.num_batches)

Now, if I do it the normal way (as it was done in class) on the simple linear model,

fit_model(model, 2, 1e-5)
fit_model(model, 4, 1e-3)

I get more or less what is expected:

[accuracy plot]

Now, interestingly, if I use the LearningRateScheduler,

fit_model(model, [2, 4], [1e-5, 1e-3])

I get something weird!

[accuracy plot]

The accuracy goes up during the low-learning-rate phase (epochs 0 and 1), just as before.
As soon as the scheduler changes the rate to 1e-3, the accuracy inexplicably dips.

It’s as if the model resets itself when the learning rate changes.

But, as I mentioned, if I do the same thing manually, I get the expected behavior.

Any ideas would be greatly appreciated.

(P.S. I’m using Keras 2.0.8 with TF 1.3.0 backend.)

What happens if you do fit_model(model, [6], [1e-5])?

I get more or less what is expected.

The graph is nearly identical to the first one you showed. Is the initial learning rate (when compiling the model) 1e-5?

I think the problem is with this line:

    model.optimizer.lr = lr

You should use the Keras backend command set_value to set the lr value.
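Something along these lines (a minimal sketch; set_value updates the value held by the existing backend variable in place, instead of rebinding the attribute to a plain Python float):

from keras import backend as K

# Overwrite the value stored in the optimizer's lr variable in place
K.set_value(model.optimizer.lr, 1e-3)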

Thanks @msp :smile:

That seems to have solved the problem.

For those who have trouble with the StateFarm sample notebook (or reproducing the results from class), here’s what I learnt:

  • Set the learning rate with the set_value command from keras.backend, not by assigning to model.optimizer.lr directly as done in the lesson (see the corrected fit_model below).
  • The learning rate schedule from class doesn’t seem to work (at least for the simple linear model). I got good results using learning rate 1e-5 for 2 epochs and then 1e-4 (not 1e-3) for 4 epochs.
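For reference, here’s the corrected fit_model. The only change from the version above is the K.set_value call (train_batches and valid_batches come from the notebook, as before):

import numpy as np
from keras import backend as K
from keras.callbacks import LearningRateScheduler

def fit_model(model, epochs=1, lr=1e-3):
    callbacks = []
    if type(lr) is list:
        # Expand the schedule to one lr per epoch
        lr_schedule = np.hstack([np.repeat(lr[i], epochs[i])
                                 for i in range(len(lr))])
        callbacks.append(LearningRateScheduler(lambda epoch: lr_schedule[epoch]))
    else:
        # The fix: update the backend variable in place
        K.set_value(model.optimizer.lr, lr)

    return model.fit_generator(train_batches, train_batches.num_batches,
                               epochs=np.sum(epochs), verbose=0,
                               callbacks=callbacks, validation_data=valid_batches,
                               validation_steps=valid_batches.num_batches)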

Since Keras’ LearningRateScheduler works as expected, I’m changing the title for everyone else.

(Previous thread title: Keras LearningRateScheduler acting up)