Relationship between memory cell and time step in LSTM

I'm studying the LSTM model.

Does one memory cell of the hidden layer in an LSTM correspond to one time step?

Example code:

```python
model.add(LSTM(128, input_shape=(4, 1)))
```

When implementing an LSTM in Keras, you can set the number of memory cells, as in the example code, regardless of the number of time steps. In the example it is 128.
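For illustration, here is a minimal sketch (assuming TensorFlow 2.x Keras; the variable names are my own) showing that the 128 is the size of the layer's state vector, independent of the 4 time steps:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(128, input_shape=(4, 1)))  # 4 time steps, 1 feature per step

x = np.zeros((2, 4, 1))        # 2 samples, 4 time steps, 1 feature
print(model.predict(x).shape)  # (2, 128): one 128-dim state vector per sample
```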

However, a typical LSTM diagram is drawn with a 1:1 correspondence between the number of time steps and the number of memory cells. Which is correct?

[figure: typical unrolled LSTM diagram]

There are 2 answers below.

Answer 1:

As I understand it, the time step is the length of the sequence handled per each processing pass (the window size). Depending on the parameter return_sequences=True/False, the layer will return either one output per step of the processed data or a single output, as explained and shown here...
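As a minimal sketch (assuming TensorFlow 2.x Keras; the shapes are my own illustration), return_sequences changes only the shape of what the layer returns:

```python
import numpy as np
from tensorflow.keras.layers import LSTM

x = np.zeros((1, 4, 1))  # 1 sample, window of 4 time steps, 1 feature

print(LSTM(8, return_sequences=False)(x).shape)  # (1, 8): single output
print(LSTM(8, return_sequences=True)(x).shape)   # (1, 4, 8): output per step
```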

The explanation here seems to be better.

Concerning the memory cell: the quote "A part of a NN that preserves some state across time steps is called a memory cell." makes me consider a memory cell to be, probably, a "container" holding temporary state for the variables in the window series until it is updated during further backpropagation (and preserved across batches when stateful=True).
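A minimal sketch (assuming TensorFlow 2.x Keras) of the stateful=True behaviour: the preserved state carries over between calls until it is explicitly reset:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM

model = Sequential()
model.add(LSTM(8, stateful=True, batch_input_shape=(1, 4, 1)))

x = np.ones((1, 4, 1))
print(model.predict(x)[0, :2])  # state starts at zero
print(model.predict(x)[0, :2])  # differs: the preserved state was carried over
model.reset_states()            # drop the preserved state
```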

Better to see it once: there is a picture of the memory cell here, and the logic of how it works here.

Also, know the usage of the whole input shape (see here): time_steps is what backpropagation is unrolled over.

Answer 2:

In an LSTM, we supply input in the following manner: [samples, timesteps, features].

- samples is the number of training examples you want to feed at a time.
- timesteps is how many values you want to use. Say you set timesteps=3: then the values at t, t-1, and t-2 are used to predict the data at t+1.
- features is how many dimensions you want to supply at each time step.

LSTM has memory cells, but I am explaining the code part so as not to confuse you. I hope this helps.
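As a minimal sketch (assuming TensorFlow 2.x Keras and a made-up toy series), this is how timesteps=3 shapes the data so that the values at t-2, t-1, and t predict the value at t+1:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

series = np.arange(10, dtype="float32")  # toy series: 0, 1, ..., 9

# Each sample is a window of 3 consecutive values; the target is the next value.
X = np.array([series[i:i + 3] for i in range(len(series) - 3)])
y = series[3:]
X = X.reshape((-1, 3, 1))  # [samples=7, timesteps=3, features=1]

model = Sequential([LSTM(16, input_shape=(3, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)  # brief fit just to show it runs end to end
```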