Error using dynamic rnn (magenta) : Dimensions of inputs should match: shape[0] = [1,38] vs. shape[1] = [128,512]


So I am running this code from magenta with some modifications:

outputs, final_state = tf.nn.dynamic_rnn(
                self.cell,
                m_seq,
                sequence_length=lens,
                initial_state=initial_state,
                swap_memory=swap_memory,
                parallel_iterations=parallel_iterations)

where self.cell is a MultiRNNCell with two layers, m_seq is a one-hot vector with shape [1, 38], and initial_state is a tuple of two LSTMStateTuples whose c and h each have shape [128, 512] (batch size and layer size).
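The error comes from inside the LSTM cell: at each step it concatenates the input slice with the hidden state h along the feature axis, and concatenation requires the batch (first) dimension to agree. A minimal NumPy sketch of that concat, using the shapes from the error above (NumPy stands in for TensorFlow's ConcatOp here):

```python
import numpy as np

x = np.ones((1, 38))     # one-hot input, batch size 1
h = np.ones((128, 512))  # LSTM hidden state, batch size 128

# The LSTM cell effectively does concat([input, h], axis=1);
# this fails because 1 != 128 on the batch axis.
try:
    np.concatenate([x, h], axis=1)
except ValueError as e:
    print("concat failed:", e)

# With matching batch sizes the concat succeeds:
x_ok = np.ones((128, 38))
combined = np.concatenate([x_ok, h], axis=1)
print(combined.shape)  # (128, 550): feature axes add, batch axis is shared
```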

When I run this I get:

InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,38] vs. shape[1] = [128,512]

Now I understand that this means a mismatch between the input m_seq and the state. But do both dimensions have to match (1 with 128, and 38 with 512)? I don't see why they would have to match at all, since this is a dynamic RNN.

2 Answers

Answer 1:

ConcatOp : Dimensions of inputs should match

I believe this answers my question. The batch size (the first dimension) must match, but the second one (the sequence length) does not, because it is a dynamic RNN. In any case, a placeholder with an undefined batch dimension can be used to accommodate varying batch sizes.
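To make that concrete: the time (sequence-length) axis may vary from call to call, but within one call the input's batch axis must equal the state's. A small shape-checking sketch (check_shapes is a hypothetical helper mimicking dynamic_rnn's constraint, not a TensorFlow API):

```python
import numpy as np

def check_shapes(inputs, state_h):
    """Mimic dynamic_rnn's constraint: inputs is [batch, time, depth],
    state_h is [batch, units]; only the batch axes must agree."""
    assert inputs.shape[0] == state_h.shape[0], "batch sizes must match"
    return inputs.shape

h = np.zeros((128, 512))                          # state for a batch of 128
print(check_shapes(np.zeros((128, 5, 38)), h))    # 5 time steps: fine
print(check_shapes(np.zeros((128, 20, 38)), h))   # 20 time steps: also fine
```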

Answer 2:

From the dynamic_rnn docs:

The first two dimensions must match across all the inputs, but otherwise the ranks and other shape components may differ.

So the leading dimensions of the inputs do have to match, even though it is a dynamic RNN.
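In practice, the usual fix is to build the initial state with the input's actual batch size (in TensorFlow, via the cell's zero_state) rather than hard-coding 128. A NumPy sketch of the idea, where zero_state is a hypothetical stand-in for the TensorFlow method:

```python
import numpy as np

def zero_state(batch_size, num_units):
    # Analogous to cell.zero_state: c and h are both [batch, units].
    return (np.zeros((batch_size, num_units)),
            np.zeros((batch_size, num_units)))

x = np.ones((1, 7, 38))           # input: batch 1, 7 time steps, depth 38
c, h = zero_state(x.shape[0], 512)
print(c.shape, h.shape)           # (1, 512) (1, 512): batch now matches the input
```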