How to pass one data array per model input in a multimodal deep autoencoder?

I'm working on a deep multimodal autoencoder for dimensionality reduction, and I'm following this code: https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/8.2%20Multi-Modal%20Networks.html

from keras.layers import Dense, Input, concatenate
from keras.models import Model

# two parallel 784-dimensional inputs, one per modality
left_input = Input(shape=(784,), name='left_input')
left_branch = Dense(32, name='left_branch')(left_input)

right_input = Input(shape=(784,), name='right_input')
right_branch = Dense(32, name='right_branch')(right_input)

# merge the two 32-dimensional encodings into a single 64-dimensional vector
x = concatenate([left_branch, right_branch])
predictions = Dense(10, activation='softmax', name='main_output')(x)

model = Model(inputs=[left_input, right_input], outputs=predictions)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit([input_data_1, input_data_2], targets)

What I would like to know is: how do I reconstruct the original data? What exactly are the input_data_1 and input_data_2 arrays passed to model.fit? And how do I pass one data array per model input?
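
From what I understand so far (I'm not sure this is correct), model.fit expects one NumPy array per named Input, passed either as a list in the same order as inputs=[...] or as a dict keyed by the Input names. Below is the rough sketch I have in mind; the dummy data, sample count, and the reconstruction layers at the end are my own guesses rather than anything from the tutorial:

import numpy as np
from keras.utils import to_categorical

# dummy data: one array per named Input, each sample has 784 features
# (the sample count and random values here are just placeholders)
n_samples = 1000
input_data_1 = np.random.rand(n_samples, 784)   # goes to 'left_input'
input_data_2 = np.random.rand(n_samples, 784)   # goes to 'right_input'
targets = to_categorical(np.random.randint(0, 10, size=(n_samples,)), num_classes=10)

# a list matches the arrays to the inputs by position ...
model.fit([input_data_1, input_data_2], targets, epochs=10, batch_size=32)

# ... or a dict matches them by the Input layer names
model.fit({'left_input': input_data_1, 'right_input': input_data_2}, targets)

# my guess at the reconstruction part: decode the concatenated code 'x'
# back into one 784-dimensional output per modality and train each output
# against its own input (the 'left_recon'/'right_recon' names are made up)
left_recon = Dense(784, activation='sigmoid', name='left_recon')(x)
right_recon = Dense(784, activation='sigmoid', name='right_recon')(x)
autoencoder = Model(inputs=[left_input, right_input], outputs=[left_recon, right_recon])
autoencoder.compile(optimizer='rmsprop', loss='mse')
autoencoder.fit([input_data_1, input_data_2], [input_data_1, input_data_2], epochs=10)

Is this the right way to do it, or am I misunderstanding how the arrays map to the model inputs?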
