Why do we need to calibrate the depth camera and color camera of a Kinect?


I am new to Kinect. I have read on the Internet that the joint calibration of a Kinect's depth camera and color camera is already finished at the factory. So what is the point of calibrating it a second time? On the other hand, as we all know, there are already many applications, such as somatosensory games, that use the Kinect; how do these applications handle calibration? It seems impossible that game players would run all kinds of calibration algorithms themselves to get it done. Thanks!


2 Answers

Deepfreeze

The Kinect does indeed ship with an internal factory calibration. In your software you can therefore use the CoordinateMapper functions to go from XYZ camera-space coordinates to UV color-image coordinates.
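For example, with the Kinect for Windows SDK v2 in C++, a single camera-space point can be projected into the color image like this (a minimal sketch: error handling is trimmed, the point value is a placeholder, and you need to link against Kinect20.lib):

```cpp
#include <Kinect.h>
#include <cstdio>

int main()
{
    IKinectSensor* sensor = nullptr;
    ICoordinateMapper* mapper = nullptr;

    if (FAILED(GetDefaultKinectSensor(&sensor)) || FAILED(sensor->Open()))
        return 1;
    if (FAILED(sensor->get_CoordinateMapper(&mapper)))
        return 1;

    // A 3D point in camera space (meters), e.g. a tracked joint position.
    CameraSpacePoint xyz = { 0.1f, 0.2f, 1.5f };

    // Ask the factory calibration where that point lands in the
    // 1920x1080 color image.
    ColorSpacePoint uv = { 0 };
    mapper->MapCameraPointToColorSpace(xyz, &uv);

    printf("(%.2f, %.2f, %.2f) m -> color pixel (%.1f, %.1f)\n",
           xyz.X, xyz.Y, xyz.Z, uv.X, uv.Y);

    mapper->Release();
    sensor->Close();
    sensor->Release();
    return 0;
}
```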

Now imagine the following: you send your XYZ data and a color image to someone else (or to a different computer for later processing). Since you no longer have access to the Kinect, you cannot ask the CoordinateMapper how to relate the two datasets.

This is why some people do their own Kinect calibration: the calibration parameters then become available explicitly and can be shipped along with the XYZ and UV data.
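As an illustration of what consuming such shipped parameters could look like offline, here is a standard pinhole projection. The Intrinsics struct and the numbers in it are hypothetical placeholders, not the Kinect's real calibration, and lens distortion is ignored for brevity:

```cpp
#include <cstdio>

// Hypothetical color-camera intrinsics shipped alongside the data.
struct Intrinsics { float fx, fy, cx, cy; };

// Standard pinhole projection: u = fx * X/Z + cx, v = fy * Y/Z + cy.
void projectToColor(const Intrinsics& k, float x, float y, float z,
                    float* u, float* v)
{
    *u = k.fx * (x / z) + k.cx;
    *v = k.fy * (y / z) + k.cy;
}

int main()
{
    Intrinsics color = { 1050.0f, 1050.0f, 960.0f, 540.0f }; // made-up values
    float u, v;
    projectToColor(color, 0.1f, 0.2f, 1.5f, &u, &v);
    printf("color pixel (%.1f, %.1f)\n", u, v);
    return 0;
}
```

Note that a full depth-to-color mapping would additionally apply the extrinsic rotation and translation between the two cameras before projecting, which is exactly the kind of parameter a self-made calibration gives you.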

That said, if you don't need this, stick to the CoordinateMapper! Calibrating the Kinect yourself is not that easy to get right.

16per9

Supernova, user @Deepfreeze gave a really good explanation. You should stick to the CoordinateMapper for calibration.

One reason this matters on the Kinect v2 is that the depth image (512x424) and the color image (1920x1080) do not have the same resolution, so relating them pixel-to-pixel requires the calibration data; that is why you really need to save this information to a file if you process the data elsewhere. Luckily, all of these features are now integrated in the Kinect SDK and can be accessed easily. You can also find the documentation on the Kinect pages on MSDN.
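For completeness, here is a minimal sketch of how the SDK bridges that resolution mismatch: ICoordinateMapper::MapDepthFrameToColorSpace maps a whole depth frame into color-image coordinates in one call (frame acquisition is omitted; depthData is assumed to hold one 512x424 depth frame):

```cpp
#include <Kinect.h>
#include <vector>

void mapDepthFrame(ICoordinateMapper* mapper,
                   const std::vector<UINT16>& depthData) // 512*424 values
{
    // One color-space coordinate per depth pixel, despite the two
    // cameras having different resolutions.
    std::vector<ColorSpacePoint> colorPoints(depthData.size());
    mapper->MapDepthFrameToColorSpace(
        static_cast<UINT>(depthData.size()), depthData.data(),
        static_cast<UINT>(colorPoints.size()), colorPoints.data());

    // colorPoints[i] now tells you which 1920x1080 color pixel depth
    // pixel i corresponds to; pixels that cannot be mapped come back
    // with negative-infinity coordinates.
}
```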