I have a Conditional Generative Adversarial Network (cGAN) for Quantum State Tomography. The metrics I am monitoring during training are the losses and the Fidelity (a measure of how similar two density matrices are). A Fidelity close to 1 is good and close to 0 is bad. My result is a Fidelity above 0.999, which is excellent, but the losses of both the Discriminator and the Generator are the opposite of what I would expect.
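For reference, this is a minimal sketch of the kind of Fidelity I mean, assuming the standard Uhlmann fidelity between density matrices; the function name and the use of NumPy/SciPy here are illustrative, not my actual code:

```python
# Minimal sketch: Uhlmann fidelity F(rho, sigma) = (Tr sqrt( sqrt(rho) sigma sqrt(rho) ))^2
# between two density matrices. 1 means identical states, 0 means orthogonal ones.
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho: np.ndarray, sigma: np.ndarray) -> float:
    """Fidelity between two density matrices rho and sigma."""
    sqrt_rho = sqrtm(rho)
    inner = sqrtm(sqrt_rho @ sigma @ sqrt_rho)
    # Small imaginary parts can appear from numerical error; keep the real part.
    return float(np.real(np.trace(inner)) ** 2)
```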
The Fidelity is continuously rising, which means that the matrices generated by the network are getting closer and closer to my target:
But the losses:
Why is that? Is something wrong or is this acceptable?
What I would expect is the loss of the Discriminator going up - it is becoming harder and harder to distinguish between real and fake data - and the loss of the Generator going down. But the opposite happened. I would say "well, my cGAN is no good and I need to rework it", but the result is good! The Generator is able to generate a matrix that closely resembles the target even though it is apparently getting worse during training.
Am I not getting something?
What loss function are you using for your GAN? For the non-saturating GAN loss or the Wasserstein GAN loss this behavior is acceptable for both the generator and the discriminator. GAN losses on their own are difficult to interpret as a performance metric during training, but typically you want the generator's loss to increase and then converge and the discriminator's loss to decrease and then converge. Here is a great article regarding GAN loss functions and their expected behaviors: https://machinelearningmastery.com/generative-adversarial-network-loss-functions/
With the non-saturating GAN loss, the generator seeks to maximize log(D(G(z))), i.e. the log-probability that the discriminator assigns to the generated samples being real. So as the discriminator gets better at spotting fakes, the generator's loss can rise even while the quality of the generated samples (your fidelity) keeps improving.
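As a concrete illustration, here is a hedged sketch of the non-saturating losses in PyTorch (assuming a sigmoid-output discriminator; the names d_real, d_fake and the helper functions are placeholders, not your code):

```python
import torch

EPS = 1e-8  # numerical safety for the logarithms

def discriminator_loss(d_real: torch.Tensor, d_fake: torch.Tensor) -> torch.Tensor:
    # D is trained to output 1 on real data and 0 on generated data.
    return -(torch.log(d_real + EPS) + torch.log(1.0 - d_fake + EPS)).mean()

def generator_loss(d_fake: torch.Tensor) -> torch.Tensor:
    # Non-saturating form: maximize log(D(G(z))), i.e. minimize -log(D(G(z))).
    # This loss only drops when D is fooled; if D keeps improving, it can rise
    # even while the generated samples keep getting closer to the target.
    return -torch.log(d_fake + EPS).mean()
```

This is why the raw loss curves are not a reliable quality signal: the two losses measure the players against each other, while your fidelity measures the generator against the ground truth.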