In Hartley and Zisserman's book "Multiple View Geometry in Computer Vision", when it comes to data normalization, it states:
Namely the points should be translated so that their centroid is at the origin, and scaled so that their RMS (root-mean-squared) distance from the origin is $\sqrt{2}$. This means that the "average" point is equal to $(1, 1, 1)^T$.
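My understanding of this passage: if $\bar{\mathbf{x}} = (\bar{x}, \bar{y})^T$ is the centroid of the 2D points and $d = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\lVert\mathbf{x}_i - \bar{\mathbf{x}}\rVert^2}$ is their RMS distance from it, then the normalizing similarity should be

$$T = \begin{bmatrix} s & 0 & -s\bar{x} \\ 0 & s & -s\bar{y} \\ 0 & 0 & 1 \end{bmatrix}, \qquad s = \frac{\sqrt{2}}{d},$$

i.e. each point is mapped as $\mathbf{x} \mapsto s(\mathbf{x} - \bar{\mathbf{x}})$ (and analogously in 3D with a target RMS distance of $\sqrt{3}$).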
Here is my implementation:
void Normalize(Eigen::Matrix2Xd* image_points, Eigen::Matrix3Xd* object_points,
               Eigen::Affine2d* T, Eigen::Affine3d* U) {
  // No explicit conversion to canonical homogeneous form (last entry 1) is
  // needed: Eigen promotes the points via colwise().homogeneous() below, so
  // this function takes non-homogeneous points as input.

  // Compute the centroids, i.e. the per-dimension means. Their negatives
  // will be applied as translations. Note that rowwise().mean() yields a
  // column vector (one entry per row).
  const Eigen::Vector2d image_centroid = image_points->rowwise().mean();
  const Eigen::Vector3d object_centroid = object_points->rowwise().mean();

  // Compute the scales. Their reciprocals will be applied.
  const double image_scale =
      std::sqrt(image_points->squaredNorm() / (2.0 * image_points->cols()));
  const double object_scale =
      std::sqrt(object_points->squaredNorm() / (3.0 * object_points->cols()));

  // Construct the similarity transformations T and U that
  //   - translate so that the mean is zero in each dimension, and
  //   - scale so that the RMS distance is sqrt(2) and sqrt(3) respectively,
  // i.e. T: x -> (x - centroid) / scale, and likewise for U.
  *T = (Eigen::Translation2d(image_centroid) * Eigen::Scaling(image_scale))
           .inverse();
  *U = (Eigen::Translation3d(object_centroid) * Eigen::Scaling(object_scale))
           .inverse();

  // Normalize the points in place using T and U.
  *image_points = *T * image_points->colwise().homogeneous();
  *object_points = *U * object_points->colwise().homogeneous();
}
This function takes image_points and object_points as input and outputs the transformations T and U that were applied to them; their inverses can later be used to denormalize quantities estimated from the normalized points. In addition to normalizing the 2D image points, it also normalizes the 3D object points.
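For context, here is roughly how I intend to use the outputs (a sketch only; P_normalized and the estimation step are placeholders, not code I have yet):

Eigen::Affine2d T;
Eigen::Affine3d U;
Normalize(&image_points, &object_points, &T, &U);
// ... estimate a 3x4 camera matrix P_normalized from the normalized
// correspondences (e.g. via a DLT) ...
Eigen::Matrix<double, 3, 4> P_normalized;  // filled by the estimation step
// Undo the normalization: P = T^(-1) * P_normalized * U.
Eigen::Matrix<double, 3, 4> P =
    T.inverse().matrix() * P_normalized * U.matrix();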
It seems my implementation is wrong.
I used these test data:
Eigen::Matrix2Xd image_points(2, 5);
image_points << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10;
Eigen::Matrix3Xd object_points(3, 5);
object_points << 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15;
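(Eigen's comma initializer fills the matrix row by row, so the five image points are $(1,6)^T, (2,7)^T, (3,8)^T, (4,9)^T, (5,10)^T$, and the object points are $(1,6,11)^T, \dots, (5,10,15)^T$.)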
And the normalized 2D image points are as follows:
-0.322329 -0.161165 0 0.161165 0.322329
-0.322329 -0.161165 0 0.161165 0.322329
Clearly, after the normalization the RMS distance from the origin is not $\sqrt{2}$.
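Working the test data through by hand: the image centroid is $(3,8)^T$, the centered points are $(-2,-2)^T, (-1,-1)^T, (0,0)^T, (1,1)^T, (2,2)^T$, and their RMS distance from the origin is $\sqrt{(8+2+0+2+8)/5} = 2$. So I would expect the normalized coordinates to be roughly $\pm 1.414$, $\pm 0.707$ and $0$, whereas the actual output is smaller by a constant factor.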
I think my code logic is correct, and the mathematical derivation is (probably) correct too, but the result is somehow wrong.
Can someone shed some light on this? By the way, what do the authors mean by the "average" point?