I am trying to re-implement a paper, which suggests adjusting the learning rate as follows:
The learning rate is decreased by a factor of the regression value with patience epochs 10 on the change value of 0.0001.
Should I use torch.optim.lr_scheduler.ReduceLROnPlateau()?
I am not sure what value I should pass to each parameter.
Does the "change value" in the statement correspond to the parameter threshold?
Does the "factor" in the statement correspond to the parameter factor?
torch.optim.lr_scheduler.ReduceLROnPlateau is indeed what you are looking for. I summarized the important parameters for you:

- mode=min: the lr will be reduced when the monitored quantity has stopped decreasing
- factor: factor by which the learning rate will be reduced
- patience: number of epochs with no improvement after which the learning rate will be reduced
- threshold: threshold for measuring the new optimum, so that only significant changes count (your "change value"). Say we have threshold=0.0001: if the loss is 18.0 on epoch n and 17.9999 on epoch n+1, that change is too small to count as an improvement; once patience such epochs pass without a significant improvement, the current learning rate is multiplied by factor.

You can check more details in the documentation: https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
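Here is a minimal sketch of how you might wire it up, assuming the paper's "factor of the regression value" maps to factor (the 0.1 below is only a placeholder), "patience epochs 10" maps to patience=10, and the "change value of 0.0001" maps to threshold=0.0001. The model and training loop are toy stand-ins, not the paper's setup:

```python
import torch
from torch import nn, optim

# Toy model and optimizer purely for illustration; use your own in practice.
model = nn.Linear(10, 1)
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Hypothetical mapping of the paper's wording onto the scheduler arguments:
#   "factor of the regression value" -> factor (0.1 is just a placeholder)
#   "patience epochs 10"             -> patience=10
#   "change value of 0.0001"         -> threshold=0.0001
scheduler = optim.lr_scheduler.ReduceLROnPlateau(
    optimizer,
    mode='min',        # reduce lr when the monitored loss stops decreasing
    factor=0.1,        # placeholder for the paper's regression value
    patience=10,
    threshold=0.0001,
)

for epoch in range(30):
    # Dummy training step on random data, just so the example runs end to end.
    inputs = torch.randn(16, 10)
    targets = torch.randn(16, 1)
    loss = nn.functional.mse_loss(model(inputs), targets)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # Step the scheduler with the metric you monitor (typically validation loss).
    scheduler.step(loss.item())
    print(epoch, optimizer.param_groups[0]['lr'])
```

Note that scheduler.step(metric) is called once per epoch with the monitored value, unlike most other schedulers, which are stepped without an argument.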