from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GlobalAveragePooling2D, Flatten, Dense

pre_trained_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))
model = Sequential()
model.add(pre_trained_model)
model.add(GlobalAveragePooling2D())
model.add(Flatten())  # no-op here: GlobalAveragePooling2D already outputs a 1-D vector
model.add(Dense(512, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.summary()
VGG16 is supposed to have 138 million parameters in total. However, model.summary() reports only 14,977,857. Can anyone explain why there is a difference in the total number of parameters? Even if I check pre_trained_model on its own, its parameter count is not 138 million either.
You have include_top=False set, which drops the top fully connected (FC) layers of VGG16. If you set include_top=True and check pre_trained_model.summary(), you will see three Dense layers (fc1, fc2, and predictions) at the bottom of the output, and the total comes to the expected 138M parameters.
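You can also verify this by hand. Below is a minimal sketch of the parameter arithmetic, built from the published VGG16 architecture (thirteen 3x3 conv layers, then FC layers of sizes 4096, 4096, and 1000); the helper names are my own:

```python
# Parameter arithmetic for VGG16 (sketch from the published architecture).
def conv_params(k, c_in, c_out):
    # k*k kernel over c_in channels, c_out filters, plus c_out biases
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # weight matrix plus biases
    return n_in * n_out + n_out

# The 13 conv layers of VGG16 as (in_channels, out_channels), all 3x3 kernels.
conv_cfg = [(3, 64), (64, 64), (64, 128), (128, 128),
            (128, 256), (256, 256), (256, 256),
            (256, 512), (512, 512), (512, 512),
            (512, 512), (512, 512), (512, 512)]
conv_total = sum(conv_params(3, ci, co) for ci, co in conv_cfg)

# The three FC layers dropped by include_top=False:
# flattened 7*7*512 feature map -> 4096 -> 4096 -> 1000 classes.
fc_total = (dense_params(7 * 7 * 512, 4096)
            + dense_params(4096, 4096)
            + dense_params(4096, 1000))

print(conv_total)             # 14714688  — what include_top=False keeps
print(fc_total)               # 123642856 — the dropped FC layers alone
print(conv_total + fc_total)  # 138357544 — the advertised "138M"
```

So roughly 124M of the 138M parameters live in the three FC layers alone.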
The lesson learned here: the majority of the parameters in this network come from the FC layers. This also illustrates, once again, how lightweight convolutional layers are in comparison with FC ones.
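The 14,977,857 from the question checks out the same way: the convolutional base of VGG16 contributes 14,714,688 parameters (its known include_top=False count), and the new head adds only about 263K. A quick sketch, with variable names of my own:

```python
# Sketch: reproduce the 14,977,857 total reported by model.summary().
def dense_params(n_in, n_out):
    return n_in * n_out + n_out  # weights + biases

vgg16_conv_base = 14_714_688  # known parameter count of VGG16 with include_top=False

# GlobalAveragePooling2D over 512 channels yields a 512-vector,
# feeding Dense(512) and then Dense(1).
head = dense_params(512, 512) + dense_params(512, 1)

print(head)                    # 263169
print(vgg16_conv_base + head)  # 14977857 — matches the question
```

In other words, the custom head is under 2% of the model; almost everything the questioner trains on top of the frozen base is negligible next to the conv stack.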