Loss stops decreasing in self-supervised learning


I am using a PyTorch implementation of SimCLR and training it on my own dataset.

The problem is that after 100 epochs the loss has only dropped from 5.6 to 5.0, and it has stopped decreasing. I wonder what the problem might be, or is this a normal situation?

The learning rate I set is 0.2, I wrap the optimizer with LARS with eeta=0.001, and the batch size is 512 (around 200,000 small images with a ResNet-18).
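For reference, here is a minimal sketch of the setup described above, assuming the torchlars LARS wrapper (where the trust coefficient that some SimCLR implementations call eeta is exposed as trust_coef); the exact wrapper, parameter names, and projection head size are assumptions and depend on the SimCLR implementation used:

    import torch
    from torch import optim
    from torchvision.models import resnet18
    from torchlars import LARS  # assumption: LARS wrapper from the torchlars package

    # Backbone as described in the question: ResNet-18 on ~200k small images
    model = resnet18(num_classes=128)  # 128-d output head is an assumption

    # Base SGD optimizer with the learning rate from the question
    base_optimizer = optim.SGD(model.parameters(), lr=0.2,
                               momentum=0.9, weight_decay=1e-6)

    # Wrap with LARS; trust_coef plays the role of "eeta" here (assumption)
    optimizer = LARS(base_optimizer, trust_coef=0.001)

    batch_size = 512  # as stated in the question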

As you can see, the learning rate is rather moderate (not inappropriately large), right? So what do you think might be the problem?


There are 2 answers below.

Answer 1:

The learning rate is too high, I think. Check the loss function and the code.

Answer 2:

Here is a good method for tuning the learning rate (eventually you'll be able to do it more intuitively): link

I'm summarizing the key steps to tuning the learning rate programmatically here:

  • Create an array of learning rate values you want to try. They should span several orders of magnitude, e.g. np.logspace(-9, -1, 21).
  • Loop through the learning rates and train your model for a fixed (low) number of epochs; the choice depends on the computation time per epoch. Save the validation loss after each run and plot the results (see the sketch after this list).
  • If you change the architecture significantly, you should tune the learning rate again.
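
Here is a minimal sketch of such a sweep, assuming hypothetical build_model(), train(), and evaluate() helpers that you would replace with your own SimCLR training and validation code:

    import numpy as np
    import matplotlib.pyplot as plt

    # Learning rates spanning several orders of magnitude
    learning_rates = np.logspace(-9, -1, 21)
    val_losses = []

    for lr in learning_rates:
        model = build_model()               # hypothetical: builds a fresh model each run
        train(model, lr=lr, epochs=3)       # hypothetical: short fixed-length training run
        val_losses.append(evaluate(model))  # hypothetical: returns the validation loss

    # Plot validation loss against learning rate on a log axis
    plt.semilogx(learning_rates, val_losses, marker="o")
    plt.xlabel("learning rate")
    plt.ylabel("validation loss")
    plt.show()

The learning rate at or just below the minimum of this curve is usually a reasonable starting point for a longer training run.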