Why do we need to calibrate the depth camera and color camera of a Kinect?

I am new to the Kinect. I have read on the Internet that the joint calibration of a Kinect's depth camera and color camera is already done at the factory. So what is the point of calibrating it a second time? On the other hand, as we all know, there are already many applications, such as somatosensory games, that use the Kinect; how do those applications handle calibration? It seems impossible that game players would run all kinds of calibration algorithms themselves. Thanks!
202 Views · Asked by supernova
There are 2 answers below.

Answer from 16per9:
Supernova, user @Deepfreeze gave a really good explanation (see the other answer below). You should stick to the CoordinateMapper for calibration.
The reason this matters on the Kinect v2 is that the depth image and the color image do not have the same resolution (512x424 depth vs. 1920x1080 color), so you need a mapping between them. That is also why you may want to save the calibration information to a file for offline use (a sketch follows below). Luckily, all of these features are now integrated into the Kinect SDK and are easy to access; you can also find documentation on the Kinect pages on MSDN.
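For illustration, here is a minimal sketch of reading the factory depth-camera intrinsics through the coordinate mapper and writing them to a file, assuming the Kinect for Windows SDK 2.0 native C++ API. The file name calibration.txt is just a placeholder, and error handling is omitted for brevity:

```cpp
#include <Kinect.h>   // Kinect for Windows SDK 2.0; link against kinect20.lib
#include <cstdio>

int main()
{
    IKinectSensor* sensor = nullptr;
    ICoordinateMapper* mapper = nullptr;

    // Open the default sensor and get its coordinate mapper.
    GetDefaultKinectSensor(&sensor);
    sensor->Open();
    sensor->get_CoordinateMapper(&mapper);

    // The SDK exposes the factory calibration of the depth camera.
    // (Note: on some SDK versions the values may only be populated
    // once the first depth frame has arrived.)
    CameraIntrinsics intrinsics = {};
    mapper->GetDepthCameraIntrinsics(&intrinsics);

    // Write the parameters out so they can travel with exported data.
    // "calibration.txt" is just a placeholder name.
    FILE* f = std::fopen("calibration.txt", "w");
    std::fprintf(f, "fx %f fy %f cx %f cy %f\n",
                 intrinsics.FocalLengthX, intrinsics.FocalLengthY,
                 intrinsics.PrincipalPointX, intrinsics.PrincipalPointY);
    std::fclose(f);

    mapper->Release();
    sensor->Close();
    sensor->Release();
    return 0;
}
```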
Answer from Deepfreeze (the explanation referenced above):

The Kinect does indeed have an internal (factory) calibration. In your software you can therefore use the CoordinateMapper functions to go from xyz world coordinates to uv color-image coordinates.
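As a sketch, assuming the same Kinect for Windows SDK 2.0 C++ setup as above (sensor opened, `mapper` pointing at an ICoordinateMapper), a single camera-space point can be mapped into the color image like this:

```cpp
// Map one camera-space point (xyz, metres) to a colour pixel (uv).
// Assumes `mapper` is an ICoordinateMapper* obtained as sketched above.
CameraSpacePoint world = { 0.10f, 0.25f, 1.50f };  // x, y, z in metres
ColorSpacePoint  pixel = { 0.0f, 0.0f };

mapper->MapCameraPointToColorSpace(world, &pixel);

// pixel.X / pixel.Y are now sub-pixel coordinates in the 1920x1080
// colour frame; points that cannot be mapped come back as -infinity.
```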
Now imagine the following: you send your xyz data and a color image to someone else (or to a different computer for later processing). Since you no longer have access to the Kinect, you cannot ask the CoordinateMapper how the two datasets relate.
This is why some people do their own Kinect calibration: the calibration parameters are then available explicitly and can be shipped together with the xyz and uv data.
That said, if you don't need this, stick to the CoordinateMapper! Calibrating the Kinect yourself is not that easy to get right.
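To illustrate what shipping the parameters buys you: with fx, fy, cx and cy stored alongside the data, any machine can do the projection itself using the standard pinhole model. The helper below is purely illustrative (it ignores lens distortion and axis conventions):

```cpp
// Purely illustrative pinhole projection with shipped intrinsics
// (fx, fy in pixels; cx, cy the principal point in pixels).
struct Intrinsics { float fx, fy, cx, cy; };

void projectToPixel(const Intrinsics& k,
                    float x, float y, float z,   // metres, camera frame
                    float& u, float& v)          // pixels
{
    u = k.fx * x / z + k.cx;
    v = k.fy * y / z + k.cy;
}
```

The exact formulation matters less than the point it makes: once the parameters are exported, they stand in for the CoordinateMapper on machines that have no Kinect attached.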