ValueError: No gradients provided for any variable: ['x_hat:0']

I am working on transforming images in order to craft adversarial attacks against computer vision systems that are robust to rotation. I would like to find an image x_hat that optimizes the mean loss computed over several random rotations of the image. Here is how I initialize the data and the variables.

# imports and image loading (the image is scaled to [0, 1])
import cv2
import numpy as np
import tensorflow as tf

img = cv2.imread("4.jpg")
img = img.reshape(224, 224, 3)
img = (np.asarray(img) / 255.0).astype(np.float32)

# initialize x_hat (the variable to optimize) and the one-hot label
x_hat = tf.Variable(img, name='x_hat')
y_hat = 11
labels = tf.one_hot(y_hat, 12)
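
For reference, this is the kind of sanity check I use to make sure a gradient can reach x_hat at all when no rotation is involved (only a sketch; resnet stands for my classifier, which returns a (logits, probabilities) pair as in the loss function below, and I assume here that it accepts the un-rotated image directly):

# sanity check (sketch): do gradients flow from the classification loss back to x_hat?
with tf.GradientTape() as tape:
    logits, _ = resnet(x_hat)
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels)
grad = tape.gradient(loss, x_hat)
print(grad is None)  # False means a gradient path back to x_hat exists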

Here is the function I would like to optimize

def cost2():
    image = x_hat

    # average the loss over num_samples randomly rotated copies of the image
    num_samples = 10
    sum_loss = 0

    for j in range(num_samples):
        # rotate the image by a random angle
        rotated = tf.keras.preprocessing.image.random_rotation(
            image.numpy(),
            tf.random.uniform(shape=(), minval=40, maxval=90),
            channel_axis=2)

        rotated_logits, _ = resnet(rotated)
        sum_loss += -1 * tf.nn.softmax_cross_entropy_with_logits(
            logits=rotated_logits, labels=labels)

    return sum_loss / num_samples

Here is the optimizer I would like to use

learning_rate = 1e-1
optim = tf.optimizers.SGD(learning_rate=learning_rate)
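
As far as I understand, the minimize call used in the loop below roughly amounts to the following (a sketch of my understanding, not the exact TensorFlow internals):

# roughly what optim.minimize(cost2, var_list=[x_hat]) does (sketch)
with tf.GradientTape() as tape:
    loss_value = cost2()
grads = tape.gradient(loss_value, [x_hat])
optim.apply_gradients(zip(grads, [x_hat]))  # raises 'No gradients provided' if the gradients are None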

Finally, here is how I try to find x_hat

epsilon = 2.0 / 255.0  # a really small perturbation
x = img

below = x - epsilon
above = x + epsilon

demo_steps = 200

# projected gradient descent
for i in range(demo_steps):

    loss = optim.minimize(cost2, var_list=[x_hat])

    if (i + 1) % 10 == 0:
        print('step %d, loss=%g' % (i + 1, loss.numpy()))

    # project back into the epsilon ball around x and into the valid pixel range
    projected = tf.clip_by_value(tf.clip_by_value(x_hat, below, above), 0, 1)

    with tf.control_dependencies([projected]):
        x_hat.assign(projected)

adv_robust = x_hat.numpy()
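
For reference, afterwards I would check that the result stays within the epsilon ball around the original image, along these lines (only a sketch):

# check (sketch): the adversarial image should differ from the original by at most epsilon
print(np.max(np.abs(adv_robust - img)))  # expected to be <= epsilon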

However, when I run the code, I get the following error:

ValueError: No gradients provided for any variable: ['x_hat:0'].

Where is my mistake here?
