My goal is to monitor the learning rate of the Adam optimizer, to which I apply an InverseTimeDecay schedule, so that I can check whether the learning rate actually decreases during training.
Having read a similar question on Stack Overflow, I made the following changes to my code:
- Added this to my list of callbacks:

    tf.keras.callbacks.LearningRateScheduler(hparams[HP_LEARNING_RATE])
- Added this helper function, based on the similar question:

    def get_lr_metric(optimizer):
        def lr(y_true, y_pred):
            return optimizer.lr
        return lr
- Also added the following in the model.compile call:

    lr_metric = [get_lr_metric(optimizer)]
    model.compile(optimizer=optimizer,
                  loss=neural_network_parameters['model_loss'],
                  metrics=neural_network_parameters['model_metric'] + lr_metric)
However, when I start training the model I get the following errors:

    TypeError: float() argument must be a string or a number, not 'InverseTimeDecay'
    TypeError: 'float' object is not callable
Kindly check my Colab notebook and comment on any changes I should make. Also, please mention in the comments any additional information that I might have forgotten to include.
[UPDATE] - I guess that my problem is the type of the optimizer.lr value, which in my case is an InverseTimeDecay object. How can I convert that object to a float number?
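As a sketch of a direct answer to the update: a LearningRateSchedule such as InverseTimeDecay is itself callable, so you can get a plain float out of it by calling it with a step number and casting the resulting scalar tensor (the schedule parameters below are made up for illustration):

```python
import tensorflow as tf

# Hypothetical schedule parameters, for illustration only.
schedule = tf.keras.optimizers.schedules.InverseTimeDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.5)

# Calling the schedule with a step returns a scalar tensor;
# float() converts it to a Python number.
lr_at_start = float(schedule(0))   # equals initial_learning_rate
lr_later = float(schedule(1000))   # decayed: 0.01 / (1 + 0.5 * 1000 / 1000)
```

In a training loop, the step to pass is the optimizer's current iteration count, e.g. `schedule(optimizer.iterations)`.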
InverseTimeDecay, and every LearningRateSchedule instance, is a function that accepts a step and returns the learning rate. So the learning rate is completely predictable from the iteration/step count, and there is no real need to monitor it using something like TensorBoard. But if you really want to, you can use something like the following:
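A minimal learning-rate-logging callback could look like this (a sketch assuming the TF 2.x Keras API, where `optimizer.lr` may hold either a float variable or a `LearningRateSchedule` object; the class name and print format are illustrative):

```python
import tensorflow as tf

class LearningRateLogger(tf.keras.callbacks.Callback):
    """Print the effective learning rate at the end of each epoch."""

    def on_epoch_end(self, epoch, logs=None):
        lr = self.model.optimizer.lr
        # If a schedule was passed to the optimizer, lr is a
        # LearningRateSchedule object: call it with the current
        # step count to get the actual value at this point in training.
        if isinstance(lr, tf.keras.optimizers.schedules.LearningRateSchedule):
            lr = lr(self.model.optimizer.iterations)
        print(f"epoch {epoch}: learning rate = {float(lr):.6g}")
```

Add an instance of `LearningRateLogger()` to the `callbacks` list passed to `model.fit`; this avoids the TypeError because the schedule is called with a step instead of being cast to a float directly.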