I am designing a Generative Adversarial Network (GAN) for de novo antibody sequence generation


I am currently working on a GAN for antibody design.

The problem is that the generator's accuracy is not improving, while the discriminator seems to perform well. The results are attached. How can I improve this?

I have over 1 million heavy- and light-chain sequences. I have encoded each sequence as integers from 1 to 20, one integer per amino acid.
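For concreteness, the integer encoding might look like the sketch below. This is an assumption about the questioner's setup: the alphabet order and the names `AMINO_ACIDS`, `AA_TO_INT`, and `encode` are all hypothetical, and 0 is reserved for padding.

```python
# Illustrative 1..20 integer encoding of amino-acid sequences.
# The alphabetical residue order and all names here are assumptions,
# not necessarily the questioner's actual mapping.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 standard residues
AA_TO_INT = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 1..20; 0 left for padding

def encode(seq: str) -> list[int]:
    """Map an amino-acid string to a list of integers in 1..20."""
    return [AA_TO_INT[aa] for aa in seq]
```

Reserving 0 for padding matters if sequences of different lengths are batched together, so that the padding token is distinguishable from every real residue.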

In the generator I have used a combination of LSTM and GRU layers followed by batch normalization.

In the discriminator I have used a stack of dense layers with LeakyReLU activations and dropout.
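The two components named for the discriminator can be sketched numerically as below. This is only a minimal NumPy illustration of what LeakyReLU and (inverted) dropout compute, not the questioner's actual discriminator code; the `alpha=0.2` slope is an assumed default.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    # Positive values pass through unchanged; negative values are
    # scaled by a small slope alpha instead of being zeroed.
    return np.where(x > 0, x, alpha * x)

def dropout(x, rate, rng):
    # Inverted dropout (training time only): zero a random fraction
    # of units, then rescale survivors by 1/(1 - rate) so the
    # expected activation is unchanged.
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```

The small negative slope in LeakyReLU keeps gradients flowing through inactive units, which is one reason it is a common choice in GAN discriminators.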

Please guide me on how to fix this.

I have tried changing the following:

  • Learning rate
  • Activation functions
  • Data batch
  • Number of epochs

The results vary, but in every case the generator loss does not come down.

[Attached: loss plots for result 1 and result 2]


1 Answer

Answer by rob0tst0p:

When training a GAN, the loss does not tell you much about how the network is actually performing. It is normal for the generator and discriminator losses to grow or shrink in different directions. What you want to see is that the losses eventually converge to some value. If you suspect the losses converge too quickly, you may be stuck in a local minimum, and you can adjust the parameters accordingly to fix the issue.
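One rough way to operationalize "the losses eventually converge" is a plateau check over a recent window, as sketched below. The function name and the `window`/`tol` parameters are hypothetical choices, not part of the answer's actual method.

```python
def has_converged(losses, window=50, tol=1e-2):
    """Crude plateau check: True when the last `window` loss values
    vary by less than `tol`. Window size and tolerance are arbitrary
    and should be tuned to the scale of the loss."""
    if len(losses) < window:
        return False
    recent = losses[-window:]
    return max(recent) - min(recent) < tol
```

A check like this can be run on both the generator and discriminator loss histories; converging too early (within the first few epochs) would be the "stuck in a local minimum" symptom the answer describes.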

Looking at the loss plots you provided, there does not seem to be anything wrong with your network based just on the loss. How do the outputs of the generator look? Are they consistent with what you would expect?
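To answer "how do the outputs of the generator look?", the generated integer sequences have to be decoded back into amino-acid strings for inspection. Assuming the 1–20 integer encoding described in the question (with an alphabetical alphabet, which is an assumption), the inverse mapping could look like this:

```python
# Inverse of a 1..20 integer encoding; the alphabetical residue
# order is assumed, not taken from the question.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def decode(indices) -> str:
    """Map generator output integers (1..20) back to an amino-acid string."""
    return "".join(AMINO_ACIDS[i - 1] for i in indices)
```

Eyeballing a handful of decoded samples (and comparing residue frequencies against the training set) is often more informative about GAN health than the loss curves themselves.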