I am running this code from Magenta, with some modifications:
outputs, final_state = tf.nn.dynamic_rnn(
    self.cell,
    m_seq,
    sequence_length=lens,
    initial_state=initial_state,
    swap_memory=swap_memory,
    parallel_iterations=parallel_iterations)
where self.cell is a MultiRNNCell with two layers, m_seq is a one-hot vector of shape [1, 38], and initial_state is a tuple of two LSTMStateTuples whose c and h each have shape [128, 512] (batch size and layer size).
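For reference, here is a stripped-down, self-consistent version of that setup (a sketch assuming TensorFlow 1.x; the cell construction and sizes are illustrative, not Magenta's actual code). Note that dynamic_rnn expects a [batch, time, depth] input, so the one-hot vector needs an explicit time axis:

import tensorflow as tf

batch_size, time_steps, input_depth, layer_size = 1, 1, 38, 512

cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.BasicLSTMCell(layer_size) for _ in range(2)])

# Stand-in for the one-hot input, with an explicit time axis.
m_seq = tf.zeros([batch_size, time_steps, input_depth])

# zero_state() returns a tuple of two LSTMStateTuples with c and h of
# shape [batch_size, layer_size] -- here [1, 512], not [128, 512], so the
# state's batch dimension matches the input's.
initial_state = cell.zero_state(batch_size, tf.float32)

outputs, final_state = tf.nn.dynamic_rnn(
    cell, m_seq, initial_state=initial_state)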
When I run this I get:
InvalidArgumentError (see above for traceback): ConcatOp : Dimensions of inputs should match: shape[0] = [1,38] vs. shape[1] = [128,512]
Now I understand that this means a mismatch between the input m_seq and the state. However, do both dimensions have to match (1 with 128, and 38 with 512)? I do not really understand why they would have to match at all, since this is a dynamic RNN.
ConcatOp : Dimensions of inputs should match
I believe this answers my question. The batch size (the first dimension) must match, because the cell concatenates the input with the previous hidden state along the second axis; the second dimensions (input depth 38 vs. state size 512) are the ones being joined by the concat, so they do not need to match. (In the error message, shape[0] and shape[1] refer to the first and second input of the concat op, not to axes.) In any case, it is possible to use a placeholder to accept varying batch sizes.
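A minimal sketch of that placeholder approach (again assuming TensorFlow 1.x; the cell construction is illustrative): leave the batch dimension as None and derive the initial state from the runtime batch size, so the two can never disagree.

import tensorflow as tf

input_depth, layer_size = 38, 512

m_seq = tf.placeholder(tf.float32, [None, None, input_depth])  # [batch, time, depth]
cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.BasicLSTMCell(layer_size) for _ in range(2)])

# tf.shape(m_seq)[0] is the batch size at run time, so the state's
# batch dimension always tracks whatever batch is actually fed.
initial_state = cell.zero_state(tf.shape(m_seq)[0], tf.float32)

outputs, final_state = tf.nn.dynamic_rnn(
    cell, m_seq, initial_state=initial_state, dtype=tf.float32)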