I have a simple network defined:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(5, 3, activation='relu', input_shape=(10, 1), name="conv1", padding="same"))
model.add(MaxPooling1D())
model.add(Conv1D(5, 3, activation='relu', name="conv2", padding="same"))
model.add(MaxPooling1D())
model.add(Dense(1, activation='relu', name="dense1"))
model.compile(loss='mse', optimizer='rmsprop')
The output shapes of the layers are as follows:
conv1 - (None, 10, 5)
max1 - (None, 5, 5)
conv2 - (None, 5, 5)
max2 - (None, 2, 5)
dense1 - (None, 2, 1)
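The 106-parameter total can be checked by hand; here is a quick sketch of the count, using only the layer sizes listed above:

```python
def conv1d_params(kernel, in_ch, filters):
    # weights: kernel * in_ch * filters, plus one bias per filter
    return kernel * in_ch * filters + filters

def dense_params(in_features, units):
    # Dense applied to a 3D input acts per timestep, so the count
    # depends only on the last axis, not on the sequence length
    return in_features * units + units

conv1 = conv1d_params(3, 1, 5)   # 3*1*5 + 5 = 20
conv2 = conv1d_params(3, 5, 5)   # 3*5*5 + 5 = 80
dense1 = dense_params(5, 1)      # 5*1 + 1 = 6
total = conv1 + conv2 + dense1
print(total)  # 106
```

Note that none of these formulas involves the sequence length (10, 5, or 2), which is the dimension pooling changes.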
The model has a total of 106 parameters. However, if I remove the max-pooling layers, the model summary looks as follows:
conv1 - (None, 10, 5)
conv2 - (None, 10, 5)
dense1 - (None, 10, 1)
In both cases the total stays at 106 parameters, so why is it commonly written that the max-pooling layer reduces the number of parameters?
Does max pooling reduce the number of parameters? In which kind of network? It depends on the architecture. In your network: no.

Explanation

A MaxPooling1D layer has no trainable parameters of its own, and the parameters of a Conv1D layer depend only on its kernel size, the number of input channels, and the number of filters, never on the input length. Your Dense layer is applied to a 3D tensor, so it acts independently on every timestep, and its parameter count depends only on the last dimension (5 inputs, 1 output). Removing the pooling layers therefore changes the activation shapes but leaves every layer's parameter count, and the 106 total, unchanged. The common claim applies to networks that Flatten the feature map before a Dense layer: there, the size of the Dense layer's weight matrix is proportional to the flattened size, which pooling shrinks. Pooling also always reduces the computation and memory used by the layers that follow, which is often conflated with reducing parameters.
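To see the case where pooling does reduce parameters, compare a Flatten + Dense head with and without a pooling step before it. This is a sketch with made-up sizes (sequence length 10, 5 channels, 1 output unit), not your model:

```python
def flatten_dense_params(length, channels, units):
    # after Flatten, the Dense weight matrix is (length * channels) x units,
    # plus one bias per unit
    flat = length * channels
    return flat * units + units

seq_len, channels, units = 10, 5, 1

without_pool = flatten_dense_params(seq_len, channels, units)    # 10*5*1 + 1 = 51
with_pool = flatten_dense_params(seq_len // 2, channels, units)  # 5*5*1 + 1 = 26
print(without_pool, with_pool)  # 51 26
```

With a pool size of 2, the Dense layer after Flatten needs roughly half the weights. In your model there is no Flatten, so this effect never appears.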