Balance GAN training by waiting for Generator to catch up?


I encountered the following discriminator loss curve (x axis times 1k is the number of images trained on).

[Image: discriminator loss curve]

I understand that, ideally, the discriminator loss should oscillate around 0.5. When it drops toward zero, does it make sense to give the generator extra training steps so it can catch up, bringing the discriminator loss back toward 0.5 before it collapses? I have never read about people doing this programmatically, even though it seems easy to implement.

# pseudo code
while True:
  loss = eval_D()
  if loss < 0.5:
    # discriminator is too strong: let the generator catch up
    train_G()
  else:
    loss = train_D()
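For reference, the schedule above can be run end to end with stand-in training functions. This is a minimal sketch of the control flow only: `train_D`, `train_G`, and `eval_D` here are hypothetical toy placeholders that nudge a scalar loss up or down, not a real GAN.

```python
def make_toy_gan():
    """Toy stand-in: D loss drifts down when D trains, up when G trains."""
    state = {"d_loss": 0.7}

    def train_D():
        # Training D makes it stronger, so its loss decreases.
        state["d_loss"] = max(0.0, state["d_loss"] - 0.05)
        return state["d_loss"]

    def train_G():
        # Training G makes D's job harder, so D's loss increases.
        state["d_loss"] = min(1.0, state["d_loss"] + 0.03)

    def eval_D():
        return state["d_loss"]

    return train_D, train_G, eval_D


def balanced_steps(n_steps, threshold=0.5):
    """Run the adaptive schedule: train G while D is too strong, else train D."""
    train_D, train_G, eval_D = make_toy_gan()
    history = []
    for _ in range(n_steps):
        if eval_D() < threshold:
            train_G()  # discriminator winning: give the generator a step
        else:
            train_D()
        history.append(eval_D())
    return history


history = balanced_steps(200)
# With this toy dynamic, the loss settles into oscillation near the
# 0.5 threshold instead of collapsing to zero.
```

In this toy setup the balancing works by construction; whether it helps a real GAN depends on the loss being a reliable signal of D/G strength, which is exactly what the question is asking.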

Is this sensible to implement?
