I have the following network definition for a siamese network:
# imports assumed by this definition (TensorFlow / Keras)
from tensorflow.keras.layers import (Input, Conv2D, BatchNormalization, LeakyReLU,
    Add, MaxPooling2D, Dropout, GlobalAveragePooling2D, Dense)
from tensorflow.keras.models import Model

def build_siamese_model(inputShape, embeddingDim=48):
    # specify the inputs for the feature extractor network
    inputs = Input(inputShape)

    ## first set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv1 = Conv2D(32, (3, 3), padding="same")(inputs)
    first_batch_norm1 = BatchNormalization()(first_conv1)
    first_act1 = LeakyReLU()(first_batch_norm1)
    second_conv1 = Conv2D(32, (5, 5), padding="same")(inputs)
    second_batch_norm1 = BatchNormalization()(second_conv1)
    second_act1 = LeakyReLU()(second_batch_norm1)
    third_conv1 = Conv2D(32, (7, 7), padding="same")(inputs)
    third_batch_norm1 = BatchNormalization()(third_conv1)
    third_act1 = LeakyReLU()(third_batch_norm1)
    residual_block1 = Add()([first_act1, second_act1, third_act1])
    pool1 = MaxPooling2D(pool_size=(2, 2))(residual_block1)
    dropout1 = Dropout(0.3)(pool1)
    # receiver convolutional layer
    receiver1_conv = Conv2D(32, (3, 3), padding="same")(dropout1)
    receiver1_batch_norm = BatchNormalization()(receiver1_conv)
    act_receiver1 = LeakyReLU()(receiver1_batch_norm)

    ## second set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv2 = Conv2D(32, (3, 3), padding="same")(act_receiver1)
    first_batch_norm2 = BatchNormalization()(first_conv2)
    first_act2 = LeakyReLU()(first_batch_norm2)
    second_conv2 = Conv2D(32, (5, 5), padding="same")(act_receiver1)
    second_batch_norm2 = BatchNormalization()(second_conv2)
    second_act2 = LeakyReLU()(second_batch_norm2)
    third_conv2 = Conv2D(32, (7, 7), padding="same")(act_receiver1)
    third_batch_norm2 = BatchNormalization()(third_conv2)
    third_act2 = LeakyReLU()(third_batch_norm2)
    residual_block2 = Add()([first_act2, second_act2, third_act2])
    pool2 = MaxPooling2D(pool_size=(2, 2))(residual_block2)
    dropout2 = Dropout(0.3)(pool2)
    # receiver convolutional layer
    receiver2_conv = Conv2D(32, (3, 3), padding="same")(dropout2)
    receiver2_batch_norm = BatchNormalization()(receiver2_conv)
    act_receiver2 = LeakyReLU()(receiver2_batch_norm)

    ## last set of CONV => BN => RELU => RESID => POOL => DROPOUT layers
    first_conv3 = Conv2D(32, (3, 3), padding="same")(act_receiver2)
    first_batch_norm3 = BatchNormalization()(first_conv3)
    first_act3 = LeakyReLU()(first_batch_norm3)
    second_conv3 = Conv2D(32, (5, 5), padding="same")(act_receiver2)
    second_batch_norm3 = BatchNormalization()(second_conv3)
    second_act3 = LeakyReLU()(second_batch_norm3)
    third_conv3 = Conv2D(32, (7, 7), padding="same")(act_receiver2)
    third_batch_norm3 = BatchNormalization()(third_conv3)
    third_act3 = LeakyReLU()(third_batch_norm3)
    residual_block3 = Add()([first_act3, second_act3, third_act3])
    pool3 = MaxPooling2D(pool_size=(2, 2))(residual_block3)
    dropout3 = Dropout(0.3)(pool3)
    # last receiver convolutional layer
    receiver3_conv = Conv2D(32, (3, 3), padding="same")(dropout3)
    receiver3_batch_norm = BatchNormalization()(receiver3_conv)
    act_receiver3 = LeakyReLU()(receiver3_batch_norm)

    # prepare the final outputs (the embeddingDim-dimensional feature vector)
    pooledOutput = GlobalAveragePooling2D()(act_receiver3)
    outputs = Dense(embeddingDim)(pooledOutput)

    # build the model
    model = Model(inputs, outputs)
    return model
This feature extractor is then connected to the inputs and output of my network through the functional API. Here is how I link these parts:
print("[INFO] building siamese network...")
imgA = Input(shape=config.IMG_SHAPE)
imgB = Input(shape=config.IMG_SHAPE)
featureExtractor = build_siamese_model(config.IMG_SHAPE)
featsA = featureExtractor(imgA)
featsB = featureExtractor(imgB)
distance = Lambda(utils.euclidean_distance)([featsA, featsB])
outputs = Dense(1, activation="sigmoid")(distance)
model = Model(inputs=[imgA, imgB], outputs=outputs)
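(For completeness, utils.euclidean_distance is just a small helper that computes the batch-wise Euclidean distance between the two embeddings, roughly like this:)

import tensorflow.keras.backend as K

def euclidean_distance(vectors):
    # sketch of the helper: unpack the two embedding tensors and
    # return their L2 distance, one value per pair in the batch
    (featsA, featsB) = vectors
    sumSquared = K.sum(K.square(featsA - featsB), axis=1, keepdims=True)
    return K.sqrt(K.maximum(sumSquared, K.epsilon()))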
However, when the model is compiled, the model summary shows my network definition above as just a single layer of the whole network.
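For reference, listing the top-level layers makes this visible; something along these lines (the exact names will vary):

# quick check: print the top-level layers of the siamese model
for i, layer in enumerate(model.layers):
    print(i, layer.name, type(layer).__name__)
# expected pattern: two InputLayers, one nested Model/Functional layer
# (the whole feature extractor), one Lambda (distance), one Dense (score)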
So, what do I want?
I would like to load the model and extract the output of a specific layer. In particular, I would like the output of the last layer of the functional object (outputs = Dense(embeddingDim)(pooledOutput) in the network definition above, with embeddingDim=48). This will give me a 48-dimensional feature vector for each pair of images I test the model with.
I checked some previous posts and tried the following:
print("Step 1: Loading Model")
model1=load_model("where/the/model/is/located", compile=False)
#I tried the output of the firstlayer, for example
model_with_intermediate_layers = Model(inputs=model1.input, outputs = model1.layers[0].output)
pred = model_with_intermediate_layers.predict([pair_1,pair_2], steps = 1)
print(pred)
What is the problem?
The problem with the code above is that it can only access layers 0, 1, 3, and 4: layers 0 and 1 give the input shapes, layer 3 gives me the score, and layer 4 is empty. **I would like to have access to an intermediate layer, especially the last layer of the feature extractor network.** How can I do that?
Considering that (i) my functional feature-extractor object is the second layer of the network; (ii) I want the output of its final layer; and (iii) the second layer's output is exactly the third layer's input, I solved the problem with code along the following lines:
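# sketch of the approach described in (i)-(iii) above, assuming the usual
# layer order: the two inputs are layers 0 and 1, the feature extractor is
# layer 2, so its output is the input of layer 3 (the distance Lambda);
# exposing that tensor yields the embeddingDim=48 features for each image
model_with_intermediate_layers = Model(inputs=model1.input,
    outputs=model1.layers[3].input)
pred = model_with_intermediate_layers.predict([pair_1, pair_2], steps=1)
print(pred)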
which gives me what I want: the 48-dimensional feature vector for each image in the pair.
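A note on why this works: the nested feature extractor is itself a Model object sitting at model1.layers[2], so an equivalent option would be to use that sub-model directly, for example:

# equivalent alternative (assuming the nested feature extractor is at index 2):
featureExtractor = model1.layers[2]
embA = featureExtractor.predict(pair_1)  # shape (num_pairs, 48)
embB = featureExtractor.predict(pair_2)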