I have been trying to compute distances with a fronto-parallel stereo setup for a while, and I cannot find the reason my end result is wrong.
What I did:
- Calibrated both cameras individually using MATLAB's Camera Calibrator
- Calibrated the stereo setup with the intrinsic parameters fixed
- Loaded an image pair from file and used StereoSGBM together with a WLS filter to get the best possible first result for testing
import cv2 as cv
import numpy as np

imgL = cv.imread(path_depth_testing+'/camL3.png')
imgR = cv.imread(path_depth_testing+'/camR3.png')
grayL = cv.cvtColor(imgL, cv.COLOR_BGR2GRAY)
grayR = cv.cvtColor(imgR, cv.COLOR_BGR2GRAY)
# borderMode/borderValue must be passed as keyword arguments;
# positionally they would land in the dst parameter instead
Left_rectified = cv.remap(grayL, Left_Stereo_Map[0], Left_Stereo_Map[1],
                          cv.INTER_LANCZOS4, borderMode=cv.BORDER_CONSTANT, borderValue=0)
Right_rectified = cv.remap(grayR, Right_Stereo_Map[0], Right_Stereo_Map[1],
                           cv.INTER_LANCZOS4, borderMode=cv.BORDER_CONSTANT, borderValue=0)
minDisparity = 0
maxDisparity = 16*25
numDisparities = maxDisparity-minDisparity
blockSize = 3
P1 = 8*3*blockSize**2
P2 = 32*3*blockSize**2
disp12MaxDiff = 50
uniquenessRatio = 15
speckleWindowSize = 16
speckleRange = 64
left_matcher = cv.StereoSGBM_create(minDisparity = minDisparity,
numDisparities = numDisparities,
blockSize = blockSize,
P1 = P1,
P2 = P2,
disp12MaxDiff = disp12MaxDiff,
uniquenessRatio = uniquenessRatio,
speckleWindowSize = speckleWindowSize,
speckleRange = speckleRange,
)
sigma = 1.5
lmbda = 8000.0
right_matcher = cv.ximgproc.createRightMatcher(left_matcher)
wls_filter = cv.ximgproc.createDisparityWLSFilter(left_matcher)
wls_filter.setLambda(lmbda)
wls_filter.setSigmaColor(sigma)
left_disp = left_matcher.compute(Left_rectified, Right_rectified)
right_disp = right_matcher.compute(Right_rectified,Left_rectified)
filtered_disp = wls_filter.filter(left_disp, Left_rectified,
disparity_map_right=right_disp)
temp = filtered_disp.astype(np.float32) / 16
points_3D = cv.reprojectImageTo3D(temp, Q)
disp_8 = (filtered_disp/256).astype(np.uint8)
colored = cv.applyColorMap(disp_8, cv.COLORMAP_JET)
- Used a mouse callback to extract the x, y location on screen and read the corresponding X, Y, Z coordinates from the depth map
def printCoordinates(event, x, y, flags, param):
    if event == cv.EVENT_LBUTTONDOWN:
        cv.circle(filtered_disp, (x, y), 50, (0, 255, 255), -1)
        print('Coordinates on screen x={xscreen}, y={yscreen} [pixels]'.format(xscreen=x, yscreen=y))
        xglobal = points_3D[y][x][0]
        yglobal = points_3D[y][x][1]
        zglobal = points_3D[y][x][2]
        print('Coordinates as seen from left camera x={:.2f}mm, y={:.2f}mm, z={:.2f}mm'.format(xglobal, yglobal, zglobal))
This is the left camera's original image:
These are the resulting disparity map after applying the WLS filter and its colored version: disparity map / colored disparity map
My question is: why is the colored disparity map in shades of violet instead of spanning the whole JET color range, and is there a systematic error in my code that makes every object's depth come out wrong (about -50% of the actual depth)?