How to calculate the gradient with respect to the "Input" layer in Caffe?


I want to implement the algorithm proposed in the paper "Generalizing to Unseen Domains via Adversarial Data Augmentation" using the Caffe framework. I need to compute the gradient of the loss with respect to the input layer and add it onto the input blob. In PyTorch this can be done with grad = torch.autograd.grad(loss, data)[0], but as far as I know Caffe has no equivalent function. So how can I compute the gradient of the "Input" layer in Caffe? By "Input" layer I mean the input image in semantic segmentation.

I have tried calling net->input_blobs()[0]->cpu_diff() after backpropagation, but the values in cpu_diff are all 0. Apparently, Caffe does not compute the gradient with respect to the input layer by default. The overall algorithm is as shown in the image from the paper.
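Here is roughly what I tried (a minimal sketch; segnet.prototxt and segnet.caffemodel are placeholder file names for my segmentation net, which is assumed to end in a loss layer):

#include <caffe/caffe.hpp>

caffe::Net<float> net("segnet.prototxt", caffe::TEST);   // placeholder prototxt
net.CopyTrainedLayersFrom("segnet.caffemodel");          // placeholder weights

float loss = 0;
net.Forward(&loss);   // forward pass computes the loss
net.Backward();       // backpropagation

// I expected dLoss/dInput here, but every value is 0:
const float* input_grad = net.input_blobs()[0]->cpu_diff();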


1 Answer

guorui (accepted answer)

To get what you want, try something like

// Seed the top diff with ones (d(output)/d(output) = 1).
for (int i = 0; i < top_vec[0]->count(); i++) {
    top_vec[0]->mutable_cpu_diff()[i] = 1.0;
}

// Backward through the layer; propagate_down[0] = true asks the layer to
// compute the gradient with respect to its bottom (input) blob. Note this is
// Layer::Backward(top, propagate_down, bottom) -- Net::Backward() takes no arguments.
std::vector<bool> propagate_down(1, true);
layer->Backward(top_vec, propagate_down, bottom_vec);

// The gradient with respect to the input is now in the bottom blob's diff.
for (int i = 0; i < bottom_vec[0]->count(); i++) {
    std::cout << i << " : " << bottom_vec[0]->cpu_diff()[i] << std::endl;
}
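As a follow-up for the net-level case in the question: Caffe only propagates gradients back to the input blob if the net prototxt contains force_backward: true. With that in place, something along these lines should work (a minimal sketch; AddInputGradient and epsilon are names I made up, and the net is assumed to end in a loss layer):

#include <caffe/caffe.hpp>

// Assumes the net prototxt starts with "force_backward: true" so that Caffe
// propagates gradients all the way back to the input blob.
void AddInputGradient(caffe::Net<float>& net, float epsilon) {
    float loss = 0;
    net.Forward(&loss);   // forward pass computes the loss
    net.Backward();       // the loss layer seeds its own diff, so no manual seeding is needed

    // As described in the question: add the gradient onto the input blob.
    caffe::Blob<float>* input = net.input_blobs()[0];
    const float* grad = input->cpu_diff();
    float* data = input->mutable_cpu_data();
    for (int i = 0; i < input->count(); ++i) {
        data[i] += epsilon * grad[i];
    }
}

After Net::Backward() with force_backward: true, net.input_blobs()[0]->cpu_diff() should no longer be all zeros, which is the behaviour missing in the question.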