I'm trying to get the latent space from an autoencoder so I can plot it and inspect its behavior. I'm not sure whether I can take it from the RepeatVector layer or whether I have to add a Dense layer.
Here is my code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

model = Sequential()
input_shape = (X_train.shape[1], X_train.shape[2])  # (timesteps, features)
model.add(LSTM(16, activation='relu', return_sequences=True, input_shape=input_shape))  # Encoder
model.add(LSTM(4, activation='relu', return_sequences=False))  # Encoder (bottleneck)
model.add(RepeatVector(X_train.shape[1]))  # Repeat latent vector across timesteps
model.add(LSTM(4, activation='relu', return_sequences=True))  # Decoder
model.add(LSTM(16, activation='relu', return_sequences=True))  # Decoder (True so TimeDistributed gets 3D input)
model.add(TimeDistributed(Dense(X_train.shape[2])))  # Decoder output
How do I get the latent-space representation?
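One approach I'm considering: build a second Model that shares the trained layers but stops at the bottleneck LSTM, then call predict on it. This is a minimal self-contained sketch, assuming the 4-unit LSTM just before RepeatVector is the latent vector and using random dummy data in place of my real X_train:

import numpy as np
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import LSTM, RepeatVector, TimeDistributed, Dense

# Dummy stand-in for X_train; the shape (samples, timesteps, features) is an assumption
X_train = np.random.rand(8, 10, 3).astype('float32')

model = Sequential()
model.add(LSTM(16, activation='relu', return_sequences=True,
               input_shape=(X_train.shape[1], X_train.shape[2])))  # Encoder
model.add(LSTM(4, activation='relu', return_sequences=False))      # Bottleneck
model.add(RepeatVector(X_train.shape[1]))
model.add(LSTM(4, activation='relu', return_sequences=True))       # Decoder
model.add(LSTM(16, activation='relu', return_sequences=True))      # Decoder
model.add(TimeDistributed(Dense(X_train.shape[2])))
model.compile(optimizer='adam', loss='mse')

# Sub-model sharing the encoder's weights, ending at the bottleneck LSTM
# (layers[1]); its output is one 4-dimensional code per input sequence.
encoder = Model(inputs=model.inputs, outputs=model.layers[1].output)
latent = encoder.predict(X_train)
print(latent.shape)  # (8, 4) -- ready to scatter-plot, e.g. after PCA/t-SNE

Is this the right way, or should the latent representation come from the RepeatVector (or a Dense layer) instead?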