I ran an AprilTag detector on a photo and got the tag's position and orientation relative to the camera. The world coordinates of the AprilTags are already known. From these I tried to calculate the camera's world coordinates, but I could not get the right result. How can I compute the camera's world coordinates?
Here is my code, which does not work properly:
import numpy as np
import tf_transformations as tf  # I use tf_transformations, but it is ok even if you do not use it.
# Info Pattern 1
ax, ay, az = 1.4924, 3.8678, 1.2938 # the world coordinates of apriltag
dx, dy, dz = 0, 0, 90 # the world degrees of rotation of apriltag
px, py, pz = -0.3636, 0.0944, 0.5891 # the relative position between camera and apriltag
ox, oy, oz, ow = 0.2206, 0.2171, -0.6671, 0.6776 # the relative orientation between camera and apriltag
wx, wy, wz = 1.9197, 4.2283, 1.6934 # the correct answer of the world coordinates
# Info Pattern 2
# ax, ay, az = 1.4379, 5.4897, 1.7913 # the world coordinates of apriltag
# dx, dy, dz = 0, 0, -180 # the world degrees of rotation of apriltag
# px, py, pz = -0.5335, -0.2042, 0.5824 # the relative position between camera and apriltag
# ox, oy, oz, ow = 0.4599, -0.4004, -0.6043, 0.5129 # the relative orientation between camera and apriltag
# wx, wy, wz = 1.882, 6.1578, 1.6934 # the correct answer of the world coordinates
# camera intrinsic matrix (not actually used in the calculation below)
k = np.array([[736, 0, 960],
              [0, 736, 540],
              [0, 0, 1]])
# the relative position and orientation
apriltag_relative_position = [px, py, pz]
apriltag_relative_orientation = [ox, oy, oz, ow] # quaternion
# the world coordinates and rotation of apriltag
apriltag_world_position = np.array([ax, ay, az])
apriltag_world_rotation_degrees = [dx, dy, dz]
apriltag_world_rotation_radians = np.radians(apriltag_world_rotation_degrees)
world_orientation_matrix = tf.euler_matrix(apriltag_world_rotation_radians[0],
                                           apriltag_world_rotation_radians[1],
                                           apriltag_world_rotation_radians[2],
                                           axes="sxyz")[:3, :3]  # computed but never used below
m180 = np.array([[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, 1]])  # 180-degree rotation about x
rotation_matrix = tf.quaternion_matrix(apriltag_relative_orientation)  # 4x4 matrix from (x, y, z, w) quaternion
#rotation_matrix = m180 @ rotation_matrix
rotation_matrix = rotation_matrix[:3, :3]
camera_world_position = apriltag_world_position - rotation_matrix.T @ np.array(apriltag_relative_position)
print(f"world position: {camera_world_position}")
My result for pattern 1 was [1.9211, 4.2247, 0.8731], but the right answer is [1.9197, 4.2283, 1.6934], and my result for pattern 2 was completely different from the correct answer.
I guess I am supposed to use the world rotation of the AprilTag somewhere (I compute world_orientation_matrix but never use it), but I have no idea what to do. How can I get the right world coordinates of the camera?
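In case it helps, here is how I understand the math should work, sketched with plain NumPy 4x4 homogeneous transforms instead of tf_transformations: if T_camera_tag is the detected pose of the tag in the camera frame and T_world_tag is the known world pose of the tag, then the camera's world pose should be T_world_camera = T_world_tag @ inv(T_camera_tag). Even so, with my pattern 1 numbers this still does not reproduce the expected answer, so I suspect there is an additional fixed rotation between the detector's tag frame and the frame in which my world rotation angles are given.

```python
import numpy as np

def quat_to_matrix(x, y, z, w):
    """Rotation matrix from a quaternion given in (x, y, z, w) order
    (the same ordering convention tf_transformations uses). The
    quaternion is normalized first, so the result is a proper rotation."""
    x, y, z, w = np.array([x, y, z, w]) / np.linalg.norm([x, y, z, w])
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])

def pose_to_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# T_camera_tag: pose of the tag in the camera frame, as the detector reports it
# (pattern 1 numbers from above)
R_ct = quat_to_matrix(0.2206, 0.2171, -0.6671, 0.6776)
T_camera_tag = pose_to_T(R_ct, [-0.3636, 0.0944, 0.5891])

# T_world_tag: known world pose of the tag (yaw = 90 degrees about z)
yaw = np.radians(90)
R_wt = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                 [np.sin(yaw),  np.cos(yaw), 0],
                 [0,            0,           1]])
T_world_tag = pose_to_T(R_wt, [1.4924, 3.8678, 1.2938])

# camera pose in the world frame: compose the known tag pose with the
# inverse of the detected camera-to-tag transform
T_world_camera = T_world_tag @ np.linalg.inv(T_camera_tag)
print("camera world position:", np.round(T_world_camera[:3, 3], 4))
```

I wrote quat_to_matrix by hand only so the snippet runs without ROS installed; as far as I understand, tf.quaternion_matrix(q)[:3, :3] should give the same matrix. One sanity check that holds regardless of frame conventions: the recovered camera position must lie at exactly the measured distance |p| from the tag's world position, since rigid transforms preserve distances.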
■Images
[Image: the world coordinates and rotations of the AprilTags]