Is the learning rate changing when we change it in dogs_cats_redux?

From the code in Vgg16.py:

    def compile(self, lr=0.001):
        self.model.compile(optimizer=Adam(lr=lr),
                loss='categorical_crossentropy', metrics=['accuracy'])

And when we want to change the learning rate, we do vgg.model.optimizer.lr = 0.001.

So I would like to know: how does vgg.model.optimizer.lr access the lr set in the compile method?

The learning rate is used by Adam (a stochastic gradient descent variant). Take a look at http://sebastianruder.com/optimizing-gradient-descent/ for an overview of the different algorithms, and feel free to ask any questions.
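
To connect it back to your compile question: compile() builds an Adam optimizer with the lr you pass in and Keras attaches it to the model, so vgg.model.optimizer is that same Adam instance. A minimal sketch (assuming the Keras 1.x API the course uses):

    from vgg16 import Vgg16          # the course's Vgg16 wrapper
    from keras import backend as K

    vgg = Vgg16()
    vgg.compile(lr=0.001)            # builds Adam(lr=0.001) and attaches it as vgg.model.optimizer
    print(type(vgg.model.optimizer))             # the Adam instance created inside compile()
    print(K.get_value(vgg.model.optimizer.lr))   # 0.001, the same lr that was passed to compile()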


Yes, I do understand that. But when we call vgg.fit(), I see there is no argument for the learning rate there.
The only method I found that takes an lr is the compile method, hence the question.

So you mean to say that when we do vgg.model.optimizer.lr = 0.001 it is setting something else internally and not going through compile? That is, they are independent?

It's just statically setting the learning rate. There are methods that let the model learn a learning rate for itself (using artificial feedforward neural networks, https://pdfs.semanticscholar.org/ad1f/e0c16c3b167e7adb6c71e165cd5edefeaea1.pdf), but for now we are setting it explicitly.
You can also check http://datascience.stackexchange.com/questions/410/choosing-a-learning-rate, which offers good insight into choosing a learning rate.
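
For example, a minimal sketch of setting it explicitly between fits (assuming vgg, batches and val_batches are set up as in the course notebook; assigning a plain float to vgg.model.optimizer.lr is what the notebooks do, but updating the optimizer's backend variable is a bit more robust):

    from keras import backend as K

    K.set_value(vgg.model.optimizer.lr, 0.01)   # start with a larger rate
    vgg.fit(batches, val_batches, nb_epoch=1)

    K.set_value(vgg.model.optimizer.lr, 0.001)  # then continue more gently
    vgg.fit(batches, val_batches, nb_epoch=2)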

You can also programmatically decrease the learning rate in Keras using a callback.

I’m not sure how easy it would be to do this with the course scripts, but if you call fit_generator on a Keras model there is a kwarg called “callbacks” you can use.

keras.callbacks.ReduceLROnPlateau watches your val_loss and reduces your learning rate by a factor you set whenever it plateaus.

There’s also an EarlyStopping option, plus you can always make your own.

See:

https://keras.io/callbacks/
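
For instance, something along these lines (a sketch assuming the Keras 1.x fit_generator argument names the course uses; in Keras 2 they become steps_per_epoch/epochs, and batches/val_batches are the generators from get_batches):

    from keras.callbacks import ReduceLROnPlateau, EarlyStopping

    callbacks = [
        # halve the learning rate whenever val_loss has not improved for 2 epochs
        ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, min_lr=1e-6),
        # stop training altogether after 4 epochs without improvement
        EarlyStopping(monitor='val_loss', patience=4),
    ]

    vgg.model.fit_generator(batches,
                            samples_per_epoch=batches.nb_sample,
                            nb_epoch=10,
                            validation_data=val_batches,
                            nb_val_samples=val_batches.nb_sample,
                            callbacks=callbacks)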

Don't be afraid to edit and update the standard Vgg16.py implementation. Have fun and experiment! Another option is to put a loop around the fitting instead of running many epochs in one call; this is described in Jeremy's dogs_cats_redux.ipynb solution, where you can specify how the learning rate changes between fits (see the sketch below).
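
Something like this hypothetical schedule (the exact learning rates and epoch counts in Jeremy's notebook may differ):

    from keras import backend as K

    # one fit() call per epoch, lowering the learning rate as training progresses
    for lr, n_epochs in [(0.01, 1), (0.001, 2), (0.0001, 2)]:
        K.set_value(vgg.model.optimizer.lr, lr)
        for i in range(n_epochs):
            vgg.fit(batches, val_batches, nb_epoch=1)
            vgg.model.save_weights('ft_lr%s_epoch%d.h5' % (lr, i))  # optional checkpoint after each epoch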

Cool, you can try it out. I am excited to see what you come up with.

I’ve been following along with the course but not using the official course scripts so far (was already tinkering around with Keras before I came across this). From my understanding the VGG class is just a wrapper around a Keras model, so callbacks should work with it too.

I’ve been using EarlyStopping and ReduceLROnPlateau for most of my models (CSVLogger is helpful too). There’s even a TensorBoard callback if you’re using the TensorFlow backend.
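
In case it helps, the logging ones look like this (file and log-dir paths here are just placeholders), and they simply go into the same callbacks list passed to fit_generator as above:

    from keras.callbacks import CSVLogger, TensorBoard

    callbacks = [
        CSVLogger('training_log.csv', append=True),  # appends per-epoch metrics to a CSV file
        TensorBoard(log_dir='./tb_logs'),            # only useful on the TensorFlow backend
    ]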