Setting up a LearningRateScheduler in Keras

Posted 2020-02-28 06:04

I'm setting up a learning rate scheduler in Keras, using the history loss to update self.model.optimizer.lr, but the value assigned to self.model.optimizer.lr never reaches the SGD optimizer, which keeps using its default learning rate. The code is:

import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.preprocessing import StandardScaler

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        self.model.optimizer.lr=3
    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        self.model.optimizer.lr=lr-10000*self.losses[-1]

def base_model():
    model=Sequential()
    model.add(Dense(4, input_dim=2, init='uniform'))
    model.add(Dense(1, init='uniform'))
    sgd = SGD(decay=2e-5, momentum=0.9, nesterov=True)
    model.compile(loss='mean_squared_error', optimizer=sgd, metrics=['mean_absolute_error'])
    return model

history=LossHistory()

estimator = KerasRegressor(build_fn=base_model,nb_epoch=10,batch_size=16,verbose=2,callbacks=[history])

estimator.fit(X_train,y_train,callbacks=[history])

res = estimator.predict(X_test)

Everything works fine when using Keras as a regressor for continuous variables, but I want to reach a smaller gradient by updating the optimizer's learning rate.

3 Answers

做自己的国王 · 2020-02-28 06:34

Thanks, I found an alternative solution, as I'm not using a GPU:

import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.optimizers import SGD
from keras.callbacks import LearningRateScheduler

sd=[]
class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = [1,1]

    def on_epoch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        sd.append(step_decay(len(self.losses)))
        print('lr:', step_decay(len(self.losses)))

epochs = 50
learning_rate = 0.1
decay_rate = 5e-6
momentum = 0.9

model=Sequential()
model.add(Dense(4, input_dim=2, init='uniform'))
model.add(Dense(1, init='uniform'))
sgd = SGD(lr=learning_rate,momentum=momentum, decay=decay_rate, nesterov=False)
model.compile(loss='mean_squared_error',optimizer=sgd,metrics=['mean_absolute_error'])

def step_decay(losses):
    if float(2*np.sqrt(np.array(history.losses[-1])))<0.3:
        lrate=0.01*1/(1+0.1*len(history.losses))
        momentum=0.8
        decay_rate=2e-6
        return lrate
    else:
        lrate=0.1
        return lrate

history=LossHistory()
lrate=LearningRateScheduler(step_decay)

model.fit(X_train,y_train,nb_epoch=epochs,callbacks=[history,lrate],verbose=2)
model.predict(X_test)

The output is (lr is learning rate):

Epoch 41/50
lr: 0.0018867924528301887
0s - loss: 0.0126 - mean_absolute_error: 0.0785
Epoch 42/50
lr: 0.0018518518518518517
0s - loss: 0.0125 - mean_absolute_error: 0.0780
Epoch 43/50
lr: 0.0018181818181818182
0s - loss: 0.0125 - mean_absolute_error: 0.0775
Epoch 44/50
lr: 0.0017857142857142857
0s - loss: 0.0126 - mean_absolute_error: 0.0785
Epoch 45/50
lr: 0.0017543859649122807
0s - loss: 0.0126 - mean_absolute_error: 0.0773

And this is what happens to the learning rate over the epochs: [plot: Learning Rate Scheduler]
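
If you want to verify that the scheduled value actually reaches the optimizer, you can read it back from the backend each epoch. This is only a minimal sketch assuming the keras.backend API used elsewhere in this thread; the LrLogger callback name is mine, not part of Keras:

import keras
import keras.backend as K

class LrLogger(keras.callbacks.Callback):
    # print the optimizer's effective learning rate at the start of every epoch
    def on_epoch_begin(self, epoch, logs={}):
        # optimizer.lr is a backend variable, so read it with K.get_value
        print('effective lr at epoch %d: %f' % (epoch, K.get_value(self.model.optimizer.lr)))

# usage: model.fit(X_train, y_train, nb_epoch=epochs, callbacks=[history, lrate, LrLogger()], verbose=2)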

够拽才男人 · 2020-02-28 06:41

The learning rate is a variable that lives on the computing device, e.g. on the GPU if you are using GPU computation. That means you have to change it with K.set_value, where K is keras.backend. For example:

import keras.backend as K
K.set_value(opt.lr, 0.01)

or in your example

K.set_value(self.model.optimizer.lr, lr-10000*self.losses[-1])
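
Applied to the callback from the question, a minimal sketch could look like this (the starting value of 3 and the 10000 scaling factor are kept from the question and are purely illustrative):

import keras
import keras.backend as K

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs={}):
        self.losses = []
        # set the initial learning rate through the backend instead of plain attribute assignment
        K.set_value(self.model.optimizer.lr, 3)

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))
        # read the current lr, adjust it from the latest loss, and write it back
        lr = K.get_value(self.model.optimizer.lr)
        K.set_value(self.model.optimizer.lr, lr - 10000 * self.losses[-1])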
神经病院院长 · 2020-02-28 06:51
keras.callbacks.LearningRateScheduler(schedule, verbose=0)

In the newer Keras API you can use a more general version of the schedule function, which takes two arguments: epoch and lr.

From docs:

schedule: a function that takes an epoch index as input (integer, indexed from 0) and current learning rate and returns a new learning rate as output (float).

From sources:

    try:  # new API
        lr = self.schedule(epoch, lr)
    except TypeError:  # old API for backward compatibility
        lr = self.schedule(epoch)
    if not isinstance(lr, (float, np.float32, np.float64)):
        raise ValueError('The output of the "schedule" function '
                         'should be float.')

So your function could be:

def lr_scheduler(epoch, lr):
    decay_rate = 0.1
    decay_step = 90
    if epoch % decay_step == 0 and epoch:
        return lr * decay_rate
    return lr

callbacks = [
    keras.callbacks.LearningRateScheduler(lr_scheduler, verbose=1)
]

model.fit(callbacks=callbacks, ... )
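
As a quick sanity check of the schedule above (the epoch numbers below are just illustrative), the function only decays the rate at positive multiples of decay_step:

print(lr_scheduler(0, 0.1))    # 0.1, epoch 0 is excluded by the `and epoch` guard
print(lr_scheduler(89, 0.1))   # 0.1, not a multiple of decay_step
print(lr_scheduler(90, 0.1))   # 0.1 * decay_rate, decayed at epoch 90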