LSTM layer in Sequential model requires 3D input but receives 2D


My data comes in 20 batches of 1,000 tokens each, with a vocabulary of 10,510 words. While trying to build an LSTM model to detect AI-generated text, training fails with:

```
ValueError: Exception encountered when calling layer 'sequential_1' (type Sequential).

Input 0 of layer "lstm_1" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 10)
```


```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(input_dim=vocab_words, output_dim=10, input_length=1000),
    LSTM(units=1000),
    Dense(1, activation='sigmoid')
])
model.fit(my_training_batch_generator, epochs=1, batch_size=20)
```

```
Model: sequential
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 embedding (Embedding)       (None, 1000, 10)          105100
 lstm (LSTM)                 (None, 100)               44400
 dense (Dense)               (None, 1)                 101
=================================================================
Total params: 149601 (584.38 KB)
Trainable params: 149601 (584.38 KB)
Non-trainable params: 0 (0.00 Byte)
```

I have tried manually setting the LSTM layer's input shape, e.g. `input_shape=(20, len(tokenizer.word_index)+1)`, and other variations of this kind. I expected the error to change, but it has not.
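To check my understanding of the shapes involved, here is a minimal NumPy sketch (the lookup table stands in for the `Embedding` layer; the shapes are the ones from my setup above):

```python
import numpy as np

# Stand-in for the Embedding layer: a lookup table of shape (vocab, dim).
# Indexing it with an integer array of shape (batch, seq_len) produces a
# (batch, seq_len, dim) tensor -- the 3D input the LSTM layer requires.
vocab_words, seq_len, dim = 10510, 1000, 10
table = np.random.rand(vocab_words, dim)

token_ids = np.random.randint(0, vocab_words, size=(20, seq_len))  # 2D int IDs
embedded = table[token_ids]                                        # 3D output

print(token_ids.shape)  # (20, 1000)
print(embedded.shape)   # (20, 1000, 10)
```

So as far as I can tell, as long as the generator yields integer token IDs of shape `(batch, 1000)`, the `Embedding` layer should hand the LSTM a 3D tensor, yet the error reports a 2D `(None, 10)` input instead.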
