ReduceLROnPlateau keeps decreasing LR across multiple models


I'm using ReduceLROnPlateau for multiple experiments, but each consecutive model run starts with a lower and lower initial learning rate.

from tensorflow.keras.callbacks import ReduceLROnPlateau


for model in models:
    # Fresh callback for every run; multiplies the LR by 0.2 after
    # 10 epochs without improvement in validation loss.
    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10)
    model.fit(dataset, epochs=10, validation_data=val_dataset, callbacks=[reduce_lr])

The learning rate in the output log looks as follows:

Model #1
  Epoch 1  ... lr: 0.01
  ...
  Epoch 21 ... lr: 0.005

Model #2
  Epoch 1  ... lr: 0.005
  ...
  Epoch 25 ... lr: 0.001

and so on. (Ignore the exact numbers; I've simplified the output.)

How do I tell the model or the callback to start from the same learning rate each time?
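
For what it's worth, this is the kind of reset I have in mind: explicitly assigning the starting rate back to the optimizer before each fit, since I suspect the callback mutates the optimizer's learning rate in place. This is only a sketch; INITIAL_LR is a hypothetical value, and it assumes each model is already compiled with an optimizer whose learning_rate is a plain variable rather than a schedule:

from tensorflow.keras.callbacks import ReduceLROnPlateau

INITIAL_LR = 0.01  # hypothetical starting rate

for model in models:
    # ReduceLROnPlateau lowers the optimizer's learning rate in place,
    # so a reused optimizer (or model) instance carries the lowered
    # rate into the next run. Reset it explicitly before fitting.
    model.optimizer.learning_rate.assign(INITIAL_LR)

    reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=10)
    model.fit(dataset, epochs=10, validation_data=val_dataset, callbacks=[reduce_lr])

Is this the right approach, or is there a built-in way to make the callback start fresh?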
