How can I get the gradient of the loss with respect to the input data, rather than with respect to the weight and bias variables?
import tensorflow as tf
from tensorflow.python.ops import rnn, rnn_cell

# Build the LSTM and project the last output to class logits.
lstm_cell = rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.0)
outputs, states = rnn.rnn(lstm_cell, x, dtype=tf.float32)
pred = tf.matmul(outputs[-1], weights['out']) + biases['out']
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))

# Gradient-descent training: compute_gradients returns (gradient, variable) pairs.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
compute_gradients = optimizer.compute_gradients(cost)
train = optimizer.apply_gradients(compute_gradients)

with tf.Session() as sess:
    sess.run(init)
    fd = {x: batch_x, y: batch_y}
    sess.run(train, feed_dict=fd)
    # Fetch the gradient values for the trainable variables.
    grad_vals = sess.run([g for (g, v) in compute_gradients], feed_dict=fd)
I can calculate the gradients with respect to the weights and biases, so how can I get the gradient with respect to batch_x directly? I tried the following:
input_grad = sess.run(tf.gradients(cost, batch_x), feed_dict=fd)
but the resulting input_grad value is [None].
The question was resolved in the comments: tf.gradients has to be taken with respect to a tensor in the graph (the placeholder x), not with respect to the NumPy array batch_x that is fed in through feed_dict, so batch_x should be replaced with x in the line below:
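input_grad = sess.run(tf.gradients(cost, x), feed_dict=fd)

Because batch_x is only the fed-in array and has no path to cost in the graph, the gradient with respect to it comes back as None. As a side note, here is a rough sketch (assuming x is the input placeholder from the question; the names input_grads and grad_at_input are just illustrative) of building the gradient op once instead of recreating it on every call:

# Symbolic gradient of the cost w.r.t. the input; tf.gradients returns a list
# with one gradient tensor per tensor in x.
input_grads = tf.gradients(cost, x)

with tf.Session() as sess:
    sess.run(init)
    # Evaluate the gradient for a concrete batch.
    grad_at_input = sess.run(input_grads, feed_dict={x: batch_x, y: batch_y})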