Keras custom lambda layer: how to normalize / scale the output

I am struggling with scaling the output of a Lambda layer. My X_train is 100×15×24 and my Y_train is 100×1 (the network consists of an LSTM layer followed by Dense layers). The code is as follows:

from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
from sklearn.preprocessing import MinMaxScaler

input_shape = (timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation="custom_activation_1", name="dense2")(dense1)
dense3 = Dense(1, activation="custom_activation_2", name="dense3")(dense1)
# dense2 and dense3 have custom activation functions whose range is the
# whole real line (so I need to normalize their output)


## custom lambda layer/ loss function ##
def custom_layer(new_input):

    add_input = new_input[0]+new_input[1]
    
    # the three lines below are where the problem occurs and the program fails
    ###############################################
    scaler = MinMaxScaler()
    scaler.fit(add_input)
    normalized = scaler.transform(add_input)
    ###############################################
    return normalized

lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])

model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, Y_train, epochs=2, batch_size=216)

How can I normalize the output of lambda_layer properly? Any ideas or suggestions are appreciated!

1 Answer

I don't think scikit-learn transformers will work inside Lambda layers: MinMaxScaler expects a NumPy array, while a Lambda layer receives symbolic tensors. If you're only interested in normalizing the output with respect to the data in the current batch, you can do the following:

from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
import tensorflow as tf


timesteps = 3
num_feat = 12

# stand-ins for the asker's custom activations; any real-valued callable works
custom_activation_1 = custom_activation_2 = tf.identity

input_shape = (timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation=custom_activation_1, name="dense2")(dense1)
dense3 = Dense(1, activation=custom_activation_2, name="dense3")(dense1)
# dense2 and dense3 output values over the whole real line, so their sum
# is min-max normalized below


## custom lambda layer/ loss function ##
def custom_layer(new_input):
    # sum the two dense outputs, then min-max scale over the batch
    # dimension (note the denominator must be max - min, not max - max)
    add_input = new_input[0] + new_input[1]
    x_min = tf.reduce_min(add_input, axis=0, keepdims=True)
    x_max = tf.reduce_max(add_input, axis=0, keepdims=True)
    normalized = (add_input - x_min) / (x_max - x_min)
    return normalized

lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])

model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])
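
Two caveats worth noting. First, if every value in a batch column happens to be equal, max - min is zero and the division produces NaNs; guarding the denominator with a small epsilon (or using tf.math.divide_no_nan) avoids that. Here is a minimal sketch of the same normalization with that guard (the helper name batch_minmax is mine, not part of Keras):

import tensorflow as tf

def batch_minmax(x, eps=1e-7):
    # scale each column of x into [0, 1] using the batch min/max;
    # eps guards against a zero range (constant column)
    x_min = tf.reduce_min(x, axis=0, keepdims=True)
    x_max = tf.reduce_max(x, axis=0, keepdims=True)
    return (x - x_min) / (x_max - x_min + eps)

# quick sanity check on a concrete tensor
x = tf.constant([[1.0], [3.0], [5.0]])
print(batch_minmax(x))  # approximately [[0.0], [0.5], [1.0]]

Second, because the min and max are computed over the batch axis, the output for one sample depends on the other samples in the batch (and on the batch size), which can behave oddly at inference time with batch size 1. If you want a per-sample, batch-independent mapping onto a bounded range instead, applying a squashing function such as tf.sigmoid to the sum would do that.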