I have trouble understanding how to get the disparity map from two images of one scene. I can currently extract feature points and filter them so that only the correct correspondences remain (let's say there are 60 feature points in total).
To get the disparity of a corresponding pair x1 and x2, I know that I have to compute:
d = x1 - x2;
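(As a side note on that formula: for rectified stereo, a single disparity also gives the depth of that point via Z = f·B/d, using the focal length and baseline you mention having. A small sketch with made-up numbers, not values from the Middlebury set:)

```python
def depth_from_disparity(f_px, baseline, d):
    """Depth of a point from its disparity in a rectified stereo pair: Z = f * B / d.
    f_px is the focal length in pixels, baseline in metres, d in pixels."""
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return f_px * baseline / d

# Hypothetical correspondence: x-coordinates in the left and right image.
x1, x2 = 420.0, 400.0
d = x1 - x2                     # disparity, as above: d = x1 - x2  -> 20.0 px
Z = depth_from_disparity(f_px=1000.0, baseline=0.1, d=d)  # -> 5.0 m
```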
My problem is how to proceed from here. Both images are about 1000x1500 pixels, but I only get the disparity at 60 pixels (because I have, e.g., 60 feature points). How do I get the disparities for all the other pixels?
My current code (in MATLAB, self-written) can't extract more than a certain number of features.
Should I look for a better extraction algorithm? Or is there another way to get the disparity from my current data? (I can also calculate the rotation matrix R, the translation vector T, and the essential matrix E, and I have the baseline, the calibration matrices of both cameras, and so on.)
I use the Middlebury stereo dataset from 2014: http://vision.middlebury.edu/stereo/data/scenes2014/
Thanks in advance for any help :) (sorry if there are spelling errors)
Disparity maps are generally associated with dense stereo vision.
By resorting to feature extraction, on the other hand, you are working in the sparse domain.
Dense stereo matching algorithms will give you a disparity map. See https://ww2.mathworks.cn/help/vision/ref/disparitybm.html for a MATLAB implementation (disparityBM), or https://docs.opencv.org/3.4/d2/d85/classcv_1_1StereoSGBM.html for an OpenCV one (StereoSGBM).
The general idea is that with a dense method you try to match pixels (or more commonly blocks) between the left and right images.
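The matching idea can be sketched in a few lines. This is a toy, pure-Python version that matches a 1-D window along a single scanline by minimising the sum of absolute differences (SAD); real block matchers like the two linked above work on 2-D blocks with many refinements (uniqueness checks, sub-pixel interpolation, etc.), so treat this only as an illustration of the principle:

```python
def match_block_disparity(left_row, right_row, x, block, max_disp):
    """For the window of width `block` centred at x in the left scanline,
    find the disparity d (shift towards smaller x in the right scanline)
    that minimises the sum of absolute differences (SAD)."""
    half = block // 2
    ref = left_row[x - half : x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_disp + 1):
        xc = x - d                      # candidate window centre in the right image
        if xc - half < 0:               # window would leave the image
            break
        cand = right_row[xc - half : xc + half + 1]
        cost = sum(abs(a - b) for a, b in zip(ref, cand))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

# Toy scanlines: the right row shows the same pattern 3 pixels earlier,
# so the expected disparity at the peak (x = 4) is 3.
left  = [0, 0, 10, 80, 90, 80, 10, 0, 0, 0, 0, 0]
right = left[3:] + [0, 0, 0]
disparity = match_block_disparity(left, right, x=4, block=3, max_disp=5)  # -> 3
```

Repeating this for every pixel (instead of only at feature points) is exactly what turns a sparse set of 60 disparities into a dense disparity map.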
Some more info in this answer and in the documentation linked above.