deep neural network model stops learning after one epoch


I am training an unsupervised NN model and, for some reason, after exactly one epoch (80 steps) the model stops learning. [training-loss plot] Do you have any idea why this might happen and what I should do to prevent it?

Here is more info about my NN: it is a deep NN that tries to solve an optimization problem. My loss function is customized; it is the objective function of the optimization problem. So if my optimization problem is min f(x), then in my DNN loss = f(x). I have 64 inputs, 64 outputs, and 3 layers in between. The first layer is:

self.l1 = nn.Linear(input_size, hidden_size)
self.relu1 = nn.LeakyReLU()
self.BN1 = nn.BatchNorm1d(hidden_size)

and the last layer is:

self.l5 = nn.Linear(hidden_size, output_size)
self.tan5 = nn.Tanh()
self.BN5 = nn.BatchNorm1d(output_size)
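For reference, here is a minimal sketch of how the layers above might be assembled into a full module. The middle layers and the exact forward order are my assumption (the question only shows the first and last blocks); the class name `DeepNet` is hypothetical.

```python
import torch
import torch.nn as nn

class DeepNet(nn.Module):
    # Hypothetical assembly of the blocks shown above:
    # Linear -> LeakyReLU -> BatchNorm1d, ending in Linear -> Tanh -> BatchNorm1d.
    def __init__(self, input_size=64, hidden_size=64, output_size=64):
        super().__init__()
        self.l1 = nn.Linear(input_size, hidden_size)
        self.relu1 = nn.LeakyReLU()
        self.BN1 = nn.BatchNorm1d(hidden_size)
        self.l5 = nn.Linear(hidden_size, output_size)
        self.tan5 = nn.Tanh()
        self.BN5 = nn.BatchNorm1d(output_size)

    def forward(self, x):
        x = self.BN1(self.relu1(self.l1(x)))
        return self.BN5(self.tan5(self.l5(x)))

model = DeepNet()
out = model(torch.randn(8, 64))   # batch of 8, 64 features each
print(out.shape)                  # torch.Size([8, 64])
```

Note that `BatchNorm1d` after the final `Tanh` rescales the output, so the network's outputs are not actually bounded to [-1, 1].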

When I scale my network up with more layers and nodes (doubled: 8 layers, each with 200 nodes), I can make a little more progress toward a lower error, but again after 100 steps the training error becomes flat!
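The plateau described above can be reproduced and measured with a standard training loop that records the loss per step. Everything here is a stand-in sketch: `f` is a placeholder for the custom objective (the real loss is not given in the question), and the small `Sequential` model substitutes for the actual network.

```python
import torch

def f(x):
    # Placeholder objective; the question's real loss is a custom f(x).
    return (x ** 2).sum(dim=1).mean()

# Stand-in for the network described above (64 inputs, 64 outputs).
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.Tanh())
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

inputs = torch.randn(256, 64)
losses = []
for step in range(80):  # one "epoch" of 80 steps, as in the question
    loss = f(model(inputs))
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())

# A flat tail (losses[-10:] barely changing) is the plateau described above.
print(losses[0], losses[-1])
```

Logging the per-step loss this way makes it easy to see exactly where the curve flattens and to compare runs with different depths, widths, or learning rates.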

[training-loss plot for the larger network]
