How do I get the 3D points of a camera with known initial extrinsics?


I am working on estimating the pose of an object with AprilTags attached to it.

I initially did this successfully for an AprilTag board (image: detected AprilTag board, omitted).

The 3D object points were built from half the tag size (tag_size/2), as shown in the code:

import numpy as np

# Tag corners in the tag's own frame, centred on the origin (Z = 0)
ob_pt1 = [-tag_size/2, -tag_size/2, 0.0]   # bottom-left
ob_pt2 = [ tag_size/2, -tag_size/2, 0.0]   # bottom-right
ob_pt3 = [ tag_size/2,  tag_size/2, 0.0]   # top-right
ob_pt4 = [-tag_size/2,  tag_size/2, 0.0]   # top-left
ob_pts = ob_pt1 + ob_pt2 + ob_pt3 + ob_pt4  # list concatenation -> 12 values
object_pts = np.array(ob_pts).reshape(4, 3)
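Because the four corners are centred on the tag's origin, the pose estimated from them refers to the tag centre. A quick sanity check of that model, in plain NumPy (the tag_size value here is an assumed example, not from the question):

```python
import numpy as np

tag_size = 0.05  # assumed tag edge length in metres (example value)

object_pts = np.array([[-tag_size/2, -tag_size/2, 0.0],
                       [ tag_size/2, -tag_size/2, 0.0],
                       [ tag_size/2,  tag_size/2, 0.0],
                       [-tag_size/2,  tag_size/2, 0.0]])

# The corners average to the origin, so the pose is that of the tag centre
centre = object_pts.mean(axis=0)

# Adjacent corners are one full edge apart
edge = np.linalg.norm(object_pts[1] - object_pts[0])
```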

Now I have to estimate the pose of an object with AprilTags attached to it. I have the known initial pose (rotation and translation vectors) for each AprilTag stuck on the object.

I have used Rodrigues on the rotation vectors to get the rotation matrix (image: transformation equation, omitted).
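For reference, this is what the Rodrigues conversion computes; a minimal NumPy sketch of the formula (cv2.Rodrigues returns the same 3x3 matrix, and the rvec here is an assumed example, not the asker's real data):

```python
import numpy as np

def rodrigues(rvec):
    """Convert a 3-element rotation vector into a 3x3 rotation matrix."""
    theta = np.linalg.norm(rvec)          # rotation angle is the vector norm
    if theta < 1e-12:
        return np.eye(3)                  # no rotation
    k = np.asarray(rvec, dtype=float).ravel() / theta   # unit rotation axis
    K = np.array([[    0, -k[2],  k[1]],
                  [ k[2],     0, -k[0]],
                  [-k[1],  k[0],     0]])  # cross-product (skew) matrix of k
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

rvec = np.array([0.0, 0.0, np.pi / 2])    # example: 90 degrees about Z
R = rodrigues(rvec)
```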

I know I also have to combine the translation vector with the rotation matrix; stacking them gives the 3x4 matrix [R|t], which is the pose. And I know that Z is always 0 in the tag's own plane, but I feel lost on how to go about using this for the 3D points.
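One way to read the step above: the planar corners (Z = 0 in the tag frame) are mapped into the object frame by applying p_object = R @ p_tag + t to each corner. A hedged sketch with assumed example values for tag_size, R, and t (not the asker's real calibration):

```python
import numpy as np

tag_size = 0.05                                   # assumed edge length (m)

# Tag corners in the tag's own frame; Z = 0 because the tag is planar
corners_tag = np.array([[-tag_size/2, -tag_size/2, 0.0],
                        [ tag_size/2, -tag_size/2, 0.0],
                        [ tag_size/2,  tag_size/2, 0.0],
                        [-tag_size/2,  tag_size/2, 0.0]])

R = np.eye(3)                                     # example rotation matrix
t = np.array([0.10, 0.02, 0.0])                   # example translation (m)

# Apply p_object = R @ p_tag + t to all four corners at once;
# corners_tag @ R.T is the row-vector form of R @ p per point
corners_object = corners_tag @ R.T + t
```

With the identity rotation used here, each corner is simply shifted by t; a real R would also rotate the tag plane within the object frame.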

My question is:

  1. How do I use these known extrinsics to get the 3D points for this object? Do I solve for X and Y and just apply Z as 0? How can I go about doing this?

Any help would be greatly appreciated!
