Does a maxpooling layer reduce the number of parameters in a network?


I have a simple network defined:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Dense

model = Sequential()
model.add(Conv1D(5, 3, activation='relu', input_shape=(10, 1), name="conv1", padding="same"))
model.add(MaxPooling1D())
model.add(Conv1D(5, 3, activation='relu', name="conv2", padding="same"))
model.add(MaxPooling1D())
model.add(Dense(1, activation='relu', name="dense1"))
model.compile(loss='mse', optimizer='rmsprop')

The shape of the layers is as follows:

conv1  - (None, 10, 5)
max1   - (None, 5, 5)
conv2  - (None, 5, 5)
max2   - (None, 2, 5)
dense1 - (None, 2, 1)

The model has a total of 106 parameters. However, if I remove the max-pooling layers, the model summary looks as follows:

conv1  - (None, 10, 5)
conv2  - (None, 10, 5)
dense1 - (None, 10, 1)

In both cases the total remains 106 parameters, so why is it commonly written that max-pooling reduces the number of parameters?

That depends on the kind of network:

  • Conv layers: no
  • Dense layers:
    • Directly after Conv or Pooling:
      • With "channels_last": no
      • With "channels_first": yes
    • After Flatten layers: yes
    • After GlobalPooling layers: no

Your network: no.
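A quick way to confirm this is to count the parameters by hand with the standard Keras formulas. None of the counts below involve the sequence length, which is the only thing the pooling layers change (a plain-Python sketch; the helper names are mine):

```python
# Hand-count the parameters of the question's network (standard Keras formulas).
def conv1d_params(kernel_size, in_channels, filters):
    # Each filter has kernel_size * in_channels weights plus one bias.
    return (kernel_size * in_channels + 1) * filters

def dense_params(in_dim, units):
    # Dense acts on the last dimension only: in_dim weights + 1 bias per unit.
    return (in_dim + 1) * units

conv1 = conv1d_params(3, 1, 5)   # 20
conv2 = conv1d_params(3, 5, 5)   # 80
dense1 = dense_params(5, 1)      # 6  (the last dim is the 5 channels)
print(conv1 + conv2 + dense1)    # 106 -- the sequence length never appears
```

Removing the pooling layers changes the shapes from (2, 5) to (10, 5), but since only the last dimension (channels) enters these formulas, the total is 106 either way.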

Explanations

  • Pooling and GlobalPooling layers change the image size, but not the number of channels.
  • Conv layers are fixed-size filters that stride along the image. The filter size is independent of the image size, so pooling causes no change: a Conv layer's parameters depend only on the kernel size and the number of channels.
  • Dense layers act on the last dimension only.
    • If the last dimension is the channel dimension, pooling layers don't affect the parameter count.
    • If the last dimension is an image side, the count is affected.
  • Flatten layers collapse the image dimensions and channels into a single dimension, so a Dense layer after Flatten does depend on the image size, and therefore on pooling.
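To make the Flatten case concrete, here is a hypothetical variant of the question's network with a Flatten before the Dense layer, again counting by hand (shapes taken from the question's summaries):

```python
# Hypothetical variant: Flatten before Dense. Now the Dense input size
# depends on the (pooled) sequence length, so pooling changes the count.
def dense_params(in_dim, units):
    return (in_dim + 1) * units

# With both poolings: conv2 output (2, 5) -> Flatten -> 10 inputs
with_pooling = dense_params(2 * 5, 1)      # 11 parameters
# Without pooling: conv2 output (10, 5) -> Flatten -> 50 inputs
without_pooling = dense_params(10 * 5, 1)  # 51 parameters
print(with_pooling, without_pooling)
```

This is the situation most writeups have in mind when they say max-pooling reduces the number of parameters: a Flatten (or a "channels_first" layout) feeding a Dense layer.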