Strange behavior of a frozen inceptionV3 net in Keras


I am loading the InceptionV3 Keras net with a TensorFlow backend. After loading saved weights and setting the trainable flag of all the layers to False, I try to fit the model and expect everything to stay stable. But the validation loss increases (and accuracy decreases) with each epoch, while the training loss and accuracy remain stable as expected.
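A minimal sketch of the setup as described, assuming a custom classification head on top of the InceptionV3 base; the weights file, number of classes, and the `train_data`/`val_data` generators are placeholders, not from the original post:

```python
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

# Build InceptionV3 without the top classifier and attach a small head.
base = InceptionV3(weights=None, include_top=False, input_shape=(299, 299, 3))
x = GlobalAveragePooling2D()(base.output)
out = Dense(10, activation='softmax')(x)   # hypothetical number of classes
model = Model(base.input, out)

# Load previously saved weights, then freeze every layer.
model.load_weights('my_weights.h5')        # placeholder path
for layer in model.layers:
    layer.trainable = False

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Observed behavior: training loss/accuracy stay flat,
# but validation loss rises (and accuracy falls) each epoch.
model.fit(train_data, epochs=10, validation_data=val_data)
```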

Can someone explain this strange behavior? I suspect it is related to the batch normalization layers.


1 Answer


I had the same problem and it looks like I found the solution. Check it out here.
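The linked solution is not reproduced above, but this symptom is commonly attributed to the BatchNormalization layers still using per-batch statistics during training even when frozen, so training and validation see differently normalized activations. One widely used remedy in tf.keras is to run the frozen base in inference mode by calling it with `training=False`. A minimal sketch under that assumption; the head, class count, and datasets are placeholders:

```python
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, Model

base = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
base.trainable = False                     # freeze all weights in the base

inputs = tf.keras.Input(shape=(299, 299, 3))
# training=False keeps BatchNormalization in inference mode, so it uses its
# stored moving mean/variance instead of the current batch's statistics.
x = base(inputs, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)   # hypothetical head
model = Model(inputs, outputs)

model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_data, validation_data=val_data, epochs=10)  # placeholder datasets
```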