Generating point cloud from many 2d images
41.4k views, asked by gilbertbw

From my somewhat limited understanding of how point clouds work, I feel that one should be able to generate a point cloud from a set of 2D images taken from around the outside of an object. The problem I am experiencing is that I cannot find any examples of how to generate such a point cloud.
2 answers

Answer from Ben:
VisualSFM is an application for 3D reconstruction: it can produce a point cloud from multiple 2D images. This video shows how to extract frames from a short clip of a tree and then use VisualSFM to build a point cloud from them.
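The core idea that tools like VisualSFM automate is: detect matching features across images, recover the relative camera geometry, and triangulate each match into a 3D point. A minimal sketch of just the triangulation step, using NumPy and entirely made-up camera parameters (the intrinsic matrix `K`, camera poses, and the test point are all hypothetical numbers, not from any real dataset):

```python
import numpy as np

# Hypothetical intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def projection_matrix(K, R, t):
    """Build the 3x4 camera matrix P = K [R | t]."""
    return K @ np.hstack([R, t.reshape(3, 1)])

# Camera 1 at the origin; camera 2 shifted sideways (a stereo-like pair).
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([-1.0, 0.0, 0.0]))

def project(P, X):
    """Project a homogeneous 3D point X through P to pixel coordinates."""
    x = P @ X
    return x[:2] / x[2]

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point seen in two views."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# A known 3D point, observed in both images...
X_true = np.array([0.3, -0.2, 4.0, 1.0])
x1, x2 = project(P1, X_true), project(P2, X_true)
# ...is recovered by triangulating the two observations.
X_est = triangulate(P1, P2, x1, x2)
```

In a real pipeline the matched pixel coordinates come from a feature detector (SIFT, ORB, etc.) and the camera matrices from structure-from-motion, with RANSAC filtering out bad matches; this sketch only shows that once both are known, each match yields one 3D point of the cloud.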
In general, 3D shape reconstruction from a sequence of 2D images is a hard problem. It ranges from difficult to extremely difficult, depending on how much is known about the camera and its relationship to the object and scene. There is a lot of information out there: try googling "3D reconstruction image sequence" or "3D image reconstruction turntable". Here is one paper that gives a good summary of the process and its challenges. This paper is also good (and it introduces RANSAC, another useful search keyword). This link frames the problem in terms of facial reconstruction, but the theory applies to this question as well.
Note that the interpretation of the 3D points depends on knowledge of the camera's extrinsic and intrinsic parameters. Extrinsic parameters specify the location and orientation of the camera with respect to the world. Intrinsic parameters (focal length, principal point, and lens distortion) map points in the camera's own coordinate frame to pixel coordinates.
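The two parameter sets can be seen as two successive steps in the pinhole projection. A short illustration with hypothetical numbers (the intrinsic matrix `K`, the pose `R`, `t`, and the test point are all invented for the example):

```python
import numpy as np

# Hypothetical intrinsics: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsics (world -> camera): identity rotation, camera
# pulled back 2 units along the optical axis.
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])

X_world = np.array([0.5, -0.25, 3.0])

# Step 1: extrinsics move the point into the camera frame.
X_cam = R @ X_world + t           # -> [0.5, -0.25, 5.0]

# Step 2: intrinsics map camera coordinates to pixel coordinates
# (perspective divide by depth).
u, v, w = K @ X_cam
pixel = np.array([u / w, v / w])  # -> [740.0, 310.0]
```

Reconstruction runs this chain in reverse: pixels are back-projected through the intrinsics into rays in the camera frame, and the extrinsics place those rays in the world so they can be intersected.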
When neither the extrinsic nor the intrinsic parameters are known, the 3D reconstruction is accurate only up to an unknown scale factor (i.e. relative sizes and distances can be established, but absolute sizes and distances cannot). When both sets of camera parameters are known, the scale, orientation, and location of the 3D points are determined. The OpenCV documentation covers the concept of camera calibration well. This link, this link, and this link are good, too.
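The scale ambiguity mentioned above can be demonstrated directly: scaling the whole scene and the camera translation by the same factor produces exactly the same pixel measurements, so no set of images can distinguish the two worlds. A small check with invented numbers (`K`, the point, the translation, and the scale factor are all hypothetical):

```python
import numpy as np

# Hypothetical intrinsics.
K = np.array([[900.0,   0.0, 400.0],
              [  0.0, 900.0, 300.0],
              [  0.0,   0.0,   1.0]])

def pixels(X, t):
    """Project world point X with identity rotation and translation t."""
    x = K @ (X + t)
    return x[:2] / x[2]

X = np.array([0.2, 0.1, 3.0])   # a scene point
t = np.array([0.4, 0.0, 1.0])   # camera translation
s = 2.5                         # arbitrary scale factor

# A scene 2.5x larger, viewed from 2.5x farther away, yields
# identical pixels -- the scale factor cancels in the perspective divide.
p_original = pixels(X, t)
p_scaled = pixels(s * X, s * t)
```

This is why uncalibrated reconstructions report shape but not absolute size; fixing the scale requires outside information such as a known baseline between cameras, a calibrated rig, or an object of known dimensions in the scene.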