Keras: regularizing loss for an output based on the other outputs


Setup

I have a model with 3 inputs and 2 outputs (figure below). I have a defined loss per each output, but then I want to add a regularization term to each loss which is a function of two outputs:

L_V = MSE(v, y_v) + lambda_ * f(v, q)
L_Q = MSE(q, y_q) + lambda_ * f(v, q)

The regularizer f(v, q) acts as an additional constraint. For example, suppose I want to solve a trade-off problem of fitting both Q and V while also minimizing the dot product of v and q.
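To make that concrete, here is a minimal sketch of such a regularizer; the name f and the batch-mean dot product are only illustrative:

```python
import tensorflow as tf

def f(v, q):
    # v, q: tensors of shape (batch_size, dim)
    # Illustrative regularizer: mean dot product between the two heads.
    return tf.reduce_mean(tf.reduce_sum(v * q, axis=-1))
```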

[figure: network architecture with 3 inputs and 2 outputs (V and Q)]

Question

Without the regularizer, I can pass my two losses as model.compile(loss=[v_loss, q_loss]). But how can I define the regularizer? My main challenge is how to access the value of the other output inside the custom v_loss function, so that I can evaluate f(v, q) there.
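To show the plain setup (no regularizer) and where it breaks down, here is a minimal sketch; the toy architecture, layer sizes, and loss names are placeholders, not my real network:

```python
import tensorflow as tf
from tensorflow import keras

# Toy stand-in for the real architecture: 3 inputs, 2 outputs (v and q).
x1, x2, x3 = (keras.Input(shape=(4,)) for _ in range(3))
h = keras.layers.Concatenate()([x1, x2, x3])
v = keras.layers.Dense(3, name="v")(h)
q = keras.layers.Dense(3, name="q")(h)
model = keras.Model([x1, x2, x3], [v, q])

# Without the regularizer, one loss per output head works fine:
model.compile(optimizer="adam", loss={"v": "mse", "q": "mse"})

# The problem: a custom per-output loss only ever receives
# (y_true, y_pred) for its own head, so q is not visible here
# and I cannot add lambda_ * f(v, q).
def v_loss(y_v, v_pred):
    return tf.reduce_mean(tf.square(y_v - v_pred))
```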

What I tried (and how it failed)

I concatenated V and Q into a single output and returned a loss of L_v + L_q + L_regu, but the network doesn't learn anything, even on the simplest linear data with plenty of iterations (see the sketch below). I think the main problem is that the Q network is also trained by L_v and, likewise, the V network is also trained by L_q, which is wrong.
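Roughly what that attempt looked like; the fixed split point and the weight lambda_ are illustrative, and v and q are assumed to be concatenated along the last axis:

```python
import tensorflow as tf

lambda_ = 0.1  # illustrative trade-off weight
DIM = 3        # assumed width of each head

def combined_loss(y_true, y_pred):
    # Split the concatenated [v, q] prediction and target back apart.
    v_pred, q_pred = y_pred[:, :DIM], y_pred[:, DIM:]
    y_v, y_q = y_true[:, :DIM], y_true[:, DIM:]
    l_v = tf.reduce_mean(tf.square(y_v - v_pred))
    l_q = tf.reduce_mean(tf.square(y_q - q_pred))
    l_regu = tf.reduce_mean(tf.reduce_sum(v_pred * q_pred, axis=-1))
    return l_v + l_q + lambda_ * l_regu
```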
