I am trying to understand 3D reconstruction of an object using a 3D structured-light scanner, and I am stuck at the step where a decoded set of camera–projector correspondences is used to reconstruct a 3D point cloud. How exactly is the 3D point cloud obtained from those correspondences? I want to understand the mathematical implementation, not the code implementation.
How is point cloud data acquired from structured-light 3D scanning?
Asked by techno
Assuming you used a structured-light method based on stripes (vertical or horizontal, e.g. binary/Gray coding or De Bruijn patterns), the idea is as follows:
A light plane passes through the projector's perspective center and the corresponding stripe in the projected pattern.
The light plane's normal must be rotated by the projector's rotation matrix relative to the camera (or to the world, depending on the calibration). The rotation can be avoided if you treat the projector's perspective center as the system origin.
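As a sketch of this step (all names and calibration values below are hypothetical placeholders, not from the original post), the light plane for a vertical stripe at projector column `u_p` can be built from the projector intrinsics and its pose relative to the camera:

```python
import numpy as np

# Hypothetical projector calibration -- replace with your own values.
K_p = np.array([[1500.0,    0.0, 512.0],   # projector intrinsic matrix
                [   0.0, 1500.0, 384.0],
                [   0.0,    0.0,   1.0]])
R = np.eye(3)                    # projector rotation relative to the camera
t = np.array([0.2, 0.0, 0.0])    # projector center in camera coordinates

def light_plane(u_p):
    """Plane through the projector center and the vertical stripe at column u_p.

    Returns (n, p0): unit plane normal and a point on the plane,
    both expressed in camera coordinates.
    """
    K_inv = np.linalg.inv(K_p)
    # Two rays (in projector coordinates) through the top and bottom
    # pixels of the stripe column; the light plane contains both.
    r_top = K_inv @ np.array([u_p,   0.0, 1.0])
    r_bot = K_inv @ np.array([u_p, 767.0, 1.0])
    # The plane normal is their cross product, rotated into the camera frame.
    n = R @ np.cross(r_top, r_bot)
    n /= np.linalg.norm(n)
    return n, t   # the projector's perspective center lies on the plane

n, p0 = light_plane(400.0)
```

For horizontal stripes the construction is the same with two pixels of a stripe row instead of a column.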
Using the correspondences, you find the pixel in the camera image that matches the light plane. Now define a vector that goes from the camera's perspective center through that pixel, and rotate this vector by the camera's rotation matrix (relative to the projector or the world, again depending on the calibration).
Intersect the light plane with that ray; the intersection is the reconstructed 3D point. How to compute it is described on Wikipedia: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
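A minimal sketch of the ray–plane intersection, assuming the camera center is the origin and `K_c` is a hypothetical camera intrinsic matrix (not from the original post):

```python
import numpy as np

# Hypothetical camera intrinsics; the camera center is taken as the origin.
K_c = np.array([[1200.0,    0.0, 640.0],
                [   0.0, 1200.0, 480.0],
                [   0.0,    0.0,   1.0]])

def triangulate(pixel, n, p0):
    """Intersect the camera ray through `pixel` with the light plane (n, p0).

    The ray is X(s) = s * d with d the back-projected pixel direction.
    Substituting into the plane equation n . (X - p0) = 0 gives
    s = (n . p0) / (n . d).
    """
    u, v = pixel
    d = np.linalg.inv(K_c) @ np.array([u, v, 1.0])   # ray direction
    denom = np.dot(n, d)
    if abs(denom) < 1e-12:
        return None            # ray is parallel to the light plane
    s = np.dot(n, p0) / denom
    return s * d               # 3D point in camera coordinates

# Example: a plane facing the camera, one meter ahead; the principal
# ray (pixel at the principal point) hits it at depth 1.
X = triangulate((640.0, 480.0),
                n=np.array([0.0, 0.0, 1.0]),
                p0=np.array([0.0, 0.0, 1.0]))
# X = [0, 0, 1]
```

Running this for every decoded camera–projector correspondence yields one 3D point per pixel, which together form the point cloud.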
As you can see, the mathematical problem (the 3D reconstruction itself) is very simple. The hard parts are recognizing the projected pattern in the image (easier than regular stereo, but still hard) and calibrating the system (finding the relative orientation between camera and projector).