Q matrix for the reprojectImageTo3D function in opencv

I am doing a project in OpenCV to detect obstacles in the path of a blind person using stereo calibration. I have computed the disparity map correctly. Now, to find the distance of an obstacle from the camera, I want its 3D coordinates [X, Y, Z], which I am guessing can be found with reprojectImageTo3D(). However, I don't have the Q matrix to use in this function, because the Q matrix I get from stereoRectify() is null, probably because I used pre-calibrated images. I do have the intrinsic and extrinsic parameters of my camera. So my question is: how can I manually create the Q matrix to use directly in reprojectImageTo3D(), given that I know the focal length, baseline and everything else about my camera? What is the basic format of the Q matrix?
Answer 1 (Javier Abellán Ferrer):
If you want to create the Q matrix directly:

cv::Mat Q = cv::Mat::zeros(4, 4, CV_64F); // allocate first; calling .at<double>() on an empty Mat is undefined behaviour
Q.at<double>(0,0) = 1.0;
Q.at<double>(0,3) = -160.0;     // -cx
Q.at<double>(1,1) = 1.0;
Q.at<double>(1,3) = -120.0;     // -cy
Q.at<double>(2,3) = 348.087;    // focal length f (in pixels)
Q.at<double>(3,2) = 1.0 / 95.0; // 1/baseline (baseline = 95)
Q.at<double>(3,3) = 0.0;        // (cx - c'x)/Tx, zero when both principal points coincide
But you should really calibrate both cameras and then get the Q matrix from cv::stereoRectify(). Be careful to read the Q matrix as double values.
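For completeness, here is a minimal sketch of how the hand-built Q above could be passed to cv::reprojectImageTo3D() to get per-pixel [X, Y, Z] values. The image file names and the StereoSGBM parameters are placeholders (not from the original answer), and it assumes the OpenCV 3+ C++ API; note that SGBM returns a fixed-point disparity scaled by 16, which has to be converted to float before reprojection.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    // Rectified left/right images (placeholder file names).
    cv::Mat left  = cv::imread("left_rect.png",  cv::IMREAD_GRAYSCALE);
    cv::Mat right = cv::imread("right_rect.png", cv::IMREAD_GRAYSCALE);

    // The same hand-built Q as in the answer above.
    cv::Mat Q = (cv::Mat_<double>(4, 4) <<
        1, 0, 0,          -160.0,     // -cx
        0, 1, 0,          -120.0,     // -cy
        0, 0, 0,           348.087,   // f
        0, 0, 1.0 / 95.0,  0.0);      // 1/baseline; (cx - c'x)/Tx = 0

    // Example block matcher; these parameters are illustrative only.
    cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 64, 9);
    cv::Mat disp16, disp32;
    sgbm->compute(left, right, disp16);
    disp16.convertTo(disp32, CV_32F, 1.0 / 16.0);  // SGBM output is fixed-point, scaled by 16

    // Reproject every pixel; xyz is CV_32FC3 with [X, Y, Z] in the baseline's units.
    cv::Mat xyz;
    cv::reprojectImageTo3D(disp32, xyz, Q, true);

    // Distance of whatever is at the image centre, for example.
    cv::Vec3f p = xyz.at<cv::Vec3f>(disp32.rows / 2, disp32.cols / 2);
    std::cout << "Z = " << p[2] << std::endl;
    return 0;
}

The Z channel of the output is the distance along the optical axis, in the same units as the baseline (millimetres here, if the 95 is in mm).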
Answer 2:

The form of the Q matrix is as follows:

Q = [ 1   0    0       -cx           ]
    [ 0   1    0       -cy           ]
    [ 0   0    0        f            ]
    [ 0   0   -1/Tx    (cx - c'x)/Tx ]

Here cx and cy are the coordinates of the principal point in the left camera (if you did stereo matching with the left camera dominant), c'x is the x-coordinate of the principal point in the right camera (cx and c'x will be the same if you specified the CV_CALIB_ZERO_DISPARITY flag for stereoRectify()), f is the focal length, and Tx is the baseline length (possibly the negative of the baseline length; it is the translation from one optical centre to the other, I think).

I would suggest having a look at the book Learning OpenCV for more information. It is still based on the older C interface, but it does a good job of explaining the underlying theory, and it is where I sourced the form of the Q matrix from.
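As a sanity check on that form (not part of the original answer), the sketch below multiplies a single pixel [x, y, d, 1]^T by the Q matrix by hand, which is the per-pixel operation reprojectImageTo3D() performs; with cx = c'x it reduces to the familiar Z = f * B / d. The numeric values reuse the example from the first answer.

#include <opencv2/opencv.hpp>
#include <iostream>

// Manually reproject one pixel (x, y) with disparity d using the Q matrix.
// This is the per-pixel equivalent of cv::reprojectImageTo3D().
cv::Point3d pixelTo3D(const cv::Mat& Q, double x, double y, double d) {
    cv::Mat p = Q * (cv::Mat_<double>(4, 1) << x, y, d, 1.0);  // homogeneous [X, Y, Z, W]
    const double W = p.at<double>(3);
    return cv::Point3d(p.at<double>(0) / W,
                       p.at<double>(1) / W,
                       p.at<double>(2) / W);
}

int main() {
    // Example values from the first answer: cx = 160, cy = 120, f = 348.087, baseline = 95.
    cv::Mat Q = (cv::Mat_<double>(4, 4) <<
        1, 0, 0,          -160.0,
        0, 1, 0,          -120.0,
        0, 0, 0,           348.087,
        0, 0, 1.0 / 95.0,  0.0);

    cv::Point3d P = pixelTo3D(Q, 200.0, 120.0, 20.0);
    std::cout << P << std::endl;  // Z should equal f * baseline / d
    return 0;
}

For the pixel (200, 120) with disparity 20 this gives Z = 348.087 * 95 / 20 ≈ 1653, i.e. about 1.65 m if the baseline is in millimetres.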