Tensorflow Loss gets Nan because of Lp-Norm as Custom Layer


The TensorFlow loss becomes NaN because of an Lp-norm implemented as a custom layer, using the following code:

from tensorflow.keras.layers import Layer
from tensorflow.keras import backend as K


class CLayerPowerDensity(Layer):

    def __init__(self, **kwargs):
        super(CLayerPowerDensity, self).__init__(**kwargs)

    def build(self, input_shape):
        # Single scalar weight that parameterises the norm order p.
        self.lp = self.add_weight(name='lp_norm',
                                  shape=(1,),
                                  initializer='ones',
                                  trainable=False)
        super(CLayerPowerDensity, self).build(input_shape)

    def call(self, inputs):
        # p = lp**2 + 1 keeps the norm order p >= 1.
        p = (self.lp ** 2) + 1

        # Lp-norm of the two inputs: (|a|**p + |b|**p) ** (1/p)
        return K.pow(K.pow(K.abs(inputs[0]), p) + K.pow(K.abs(inputs[1]), p), 1 / p)
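
For context, a minimal usage sketch (assuming TensorFlow 2.x and two input tensors of the same shape); with the initial lp = 1 the layer uses p = 2, i.e. the element-wise Euclidean norm of the two inputs:

import tensorflow as tf

a = tf.constant([[3.0, 0.0]])
b = tf.constant([[4.0, 5.0]])

layer = CLayerPowerDensity()
out = layer([a, b])    # p = 1**2 + 1 = 2  ->  sqrt(a**2 + b**2) element-wise
print(out.numpy())     # approx. [[5. 5.]]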

Does anybody know why I get NaN in the loss, and how can I solve the problem?

If I use return K.sqrt((inputs[0] ** 2) + (inputs[1] ** 2)), everything works fine, and it also works with the return value above. But as soon as I set the weight "lp_norm" to trainable=True, my loss becomes NaN.
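
A likely culprit once lp is trainable is the gradient with respect to the exponent: d/dp of x**p is x**p * log(x), which becomes NaN whenever x = 0, so any zero entry in the inputs (or in the inner sum) can break the gradient as soon as the exponent receives one. A minimal sketch of a call method that keeps the bases of K.pow away from zero, reusing the class and K import above and a hypothetical small constant eps (not from the original code):

    def call(self, inputs):
        p = (self.lp ** 2) + 1
        eps = 1e-7  # hypothetical small constant, tune as needed

        # Clamp the absolute values away from zero so the gradient of
        # x**p with respect to p (which involves log(x)) stays finite.
        a = K.maximum(K.abs(inputs[0]), eps)
        b = K.maximum(K.abs(inputs[1]), eps)

        return K.pow(K.pow(a, p) + K.pow(b, p), 1 / p)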
