For a project I am working on, I have written a function that takes the position and rotation (in the global coordinate frame) of a robot and of a camera, and projects the robot's position into the camera's image plane. The camera intrinsics are described by a 3x3 calibration matrix K. The offset of the robot's camera from the robot's IMU is given by the 4x4 transformation matrix spot_T_imu_cam (the camera-to-IMU transformation); likewise, the offset of the HL camera from its IMU is given by HL_T_imu_cam. The code runs, but the results do not appear to be correct, and I am looking for something I might have missed.
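For reference, the model I am trying to implement is the standard pinhole projection: point2d ~ K [I | 0] p_cam, followed by a division by the third coordinate to get pixel coordinates. A minimal sanity check of just that step in isolation, with made-up intrinsics (the K values here are hypothetical, not my actual calibration):

import numpy as np

# Hypothetical intrinsics: focal length 500 px, principal point (320, 240)
K = np.array([[500., 0., 320.],
              [0., 500., 240.],
              [0., 0., 1.]])
p_cam = np.array([0.1, -0.05, 2.0])  # point in the camera frame (z forward)
uvw = K @ p_cam                      # homogeneous image coordinates
uv = uvw[:2] / uvw[2]                # perspective divide -> pixels
print(uv)                            # ~[345., 227.5]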
Here is the full function:

import numpy as np
import pytransform3d.transformations as pytr
from pytransform3d.transformations import invert_transform

# 3x4 matrix [I | 0] that drops the homogeneous coordinate before applying K
Id = np.append(np.identity(3), np.array([[0., 0., 0.]]).T, axis=1)

def get_projected_point(pq_spot, pq_HL, p=[0., 0., 0.]):
    # Poses of the robot (spot) and of the HL camera rig in the global frame
    T_G_spot = pytr.transform_from_pq(pq_spot)
    T_G_HL = pytr.transform_from_pq(pq_HL)
    # Compose: robot camera -> robot IMU -> global -> HL IMU -> HL camera
    T_HL_spot = invert_transform(HL_T_imu_cam) @ invert_transform(T_G_HL) @ T_G_spot @ spot_T_imu_cam
    # Map p (as a homogeneous point) into the HL camera frame
    p_in_HL_frame = T_HL_spot @ np.array([p[0], p[1], p[2], 1.])[:, None]
    assert p_in_HL_frame[3] == 1.
    # Project into the image plane with the intrinsics
    point2d = K @ Id @ p_in_HL_frame
    # point2d = point2d / point2d[2]
    # assert point2d[2] == 1.
    # point2d = K @ p_in_HL_frame[:3]
    return point2d
The function is then called with two recorded global poses, pq1 and pq2:

point2d = get_projected_point(pq1, pq2)
ax.scatter([point2d[0]], [point2d[1]], color='red')  # ax: an existing matplotlib Axes

point2d = get_projected_point(pq1, pq2, p=[0, 0, -1])
ax.scatter([point2d[0]], [point2d[1]], color='green')
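To narrow things down, a fully synthetic configuration like the one below can isolate the projection step (all values are made up: both IMU-to-camera offsets set to identity, the HL camera at the global origin with identity rotation, and the robot 2 m along the camera's z axis). In that setup the robot origin should land at the principal point after dividing by the third coordinate:

K = np.array([[500., 0., 320.],   # hypothetical intrinsics again
              [0., 500., 240.],
              [0., 0., 1.]])
spot_T_imu_cam = np.eye(4)        # no camera/IMU offset on the robot
HL_T_imu_cam = np.eye(4)          # no camera/IMU offset on the HL rig
pq_HL = np.array([0., 0., 0., 1., 0., 0., 0.])    # (x, y, z, qw, qx, qy, qz)
pq_spot = np.array([0., 0., 2., 1., 0., 0., 0.])  # robot 2 m in front of the camera
point2d = get_projected_point(pq_spot, pq_HL)
print((point2d / point2d[2])[:2])                 # expect ~[[320.], [240.]]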
So far I have tried double-checking the transformation matrices and re-recording the input global poses, but neither has resolved the issue.
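For completeness, a consistency check along these lines (with an arbitrary example pose, not one of my real calibrations) at least rules out malformed matrices:

import numpy as np
import pytransform3d.transformations as pytr

# Arbitrary example pose: unit quaternion for a 0.5 rad rotation about x
T = pytr.transform_from_pq([0.1, -0.2, 0.3, np.cos(0.25), np.sin(0.25), 0., 0.])
pytr.check_transform(T)  # raises if T is not a valid SE(3) transform
# Inverting and composing should give back the identity
assert np.allclose(pytr.invert_transform(T) @ T, np.eye(4))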