2D edge to 3D model space via glm::unProject - ray not intersecting with model


I'm trying to do ray picking with collision detection, meaning a 2D position on the screen is transformed into a 3D position in model space. My problem is that the ray doesn't always intersect with the model, so I don't always get a valid 3D position.

Here's what I do:

I create a depth texture attached to a framebuffer so I can glReadPixels() from it later.

glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, screenWidth, screenHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
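For context, attaching that texture to the framebuffer might look like this (fbo and depthTex are illustrative names; error handling is minimal):

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// Attach the depth texture created above as the depth attachment.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "depth framebuffer is incomplete\n";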

cv::Mat_<float> depthCV(screenHeight, screenWidth);
glReadPixels(0, 0, screenWidth, screenHeight, GL_DEPTH_COMPONENT, GL_FLOAT, depthCV.ptr());
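The values read back this way are non-linear window-space depth. For a standard perspective projection with the default [0, 1] depth range, one possible linearization looks like this (zNear/zFar are the clip-plane distances):

float linearizeDepth(float winZ, float zNear, float zFar)
{
    float ndcZ = 2.0f * winZ - 1.0f; // window depth [0, 1] -> NDC depth [-1, 1]
    // Invert the projection's depth mapping; the result is the eye-space
    // distance in [zNear, zFar].
    return (2.0f * zNear * zFar) / (zFar + zNear - ndcZ * (zFar - zNear));
}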

The linearized depth is then stored in a cv::Mat_<uchar> for edge detection via cv::Canny() (don't forget to flip the image vertically, since OpenGL's window origin is bottom-left while OpenCV's is top-left):

cv::blur(gray, edge, cv::Size(3, 3));
cv::Canny(edge, edge, edgeThresh, edgeThresh * ratio, 3);

... and get the contour via

cv::findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_NONE, cv::Point(0, 0));

The 2D coordinates (x, y) of the contour(s) are then transformed into 3D model coordinates via:

GLint viewport[4];
GLfloat winX, winY, winZ;
glGetIntegerv(GL_VIEWPORT, viewport);
winX = (float)x;
winY = (float)viewport[3] - (float)y; // convert from image (top-left) to window (bottom-left) coordinates
glReadPixels(x, int(winY), 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ); // depth at that pixel
glm::vec4 viewportvec(viewport[0], viewport[1], viewport[2], viewport[3]);
glm::vec3 winCoords(winX, winY, winZ);

glm::vec3 modelCoords = glm::unProject(winCoords, view * model, proj, viewportvec);
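One detail worth checking at this point: a pixel the model never covered still holds the clear depth (1.0 by default), and unprojecting it necessarily lands near the far plane. A simple guard could be:

// winZ at (or extremely close to) the clear depth means the pixel shows
// background, so the unprojected point lies on the far plane, not the model.
bool hitModel = winZ < 1.0f - 1e-6f;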

After doing this, looking at the resulting 3D points shows that some rays don't intersect with the model; the length of those rays is close to the far-plane value. I tried to broaden the input by also sampling the one-pixel neighborhood around (x, y), but some rays still don't intersect the model. A sketch of that neighborhood search follows below.
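What that neighborhood search might look like as a sketch (bounds checks omitted; it keeps the depth sample closest to the camera in the 3x3 window):

float bestZ = 1.0f; // start at the clear depth, i.e. "no hit"
for (int dy = -1; dy <= 1; ++dy) {
    for (int dx = -1; dx <= 1; ++dx) {
        float z;
        glReadPixels(x + dx, int(winY) + dy, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &z);
        if (z < bestZ)
            bestZ = z; // keep the sample closest to the camera
    }
}
// bestZ still at 1.0f means no surface was found around (x, y).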

Any idea why this is, or how to solve it? Is the edge extraction with the preceding blur responsible for it?

[Image: teapot with the hit (red) / miss (blue) point distribution]

I attached an image showing the distribution. The blue points are rays that do not intersect the model (their length is close to the far-plane value); the red ones do. I used the teapot model for my testing.

I didn't want to paste my whole code here, for readability. If more code or information is needed to understand what might be wrong, please let me know.

1 Answer

Is the edge extraction with the preceding blur responsible for it?

Yes, most certainly.

I suggest that instead of an edge-detection post-processing filter you implement edge extraction in a rasterizing fragment shader. It's quite simple, actually:

Transform the face normal into screen coordinates (per fragment). If abs(screenspace_normal.z) < threshold, the face is nearly perpendicular to the screen and hence on an edge. Discard every fragment that is not on an edge. What remains are fragments that are (close to) an edge and can be processed further; a shader sketch follows below.
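A minimal GLSL sketch of that test (viewNormal and threshold are illustrative names; the vertex shader is assumed to pass the normal transformed into view space):

#version 330 core
in vec3 viewNormal;        // normal in view space, interpolated per fragment
uniform float threshold;   // tunable, e.g. 0.2
out vec4 fragColor;

void main()
{
    // A normal whose z component is near zero belongs to a face that is
    // almost perpendicular to the screen, i.e. a silhouette edge.
    if (abs(normalize(viewNormal).z) >= threshold)
        discard;           // not an edge: drop the fragment
    fragColor = vec4(1.0); // keep edge fragments
}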

Be aware that this will also extract edges of "hidden" surfaces, so you should probably perform an "early Z" depth-only pass first, then a second pass that does not write to the depth buffer and emits only the edge fragments.
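In host code the two passes could be structured like this (drawModel, fillShader and edgeShader are illustrative):

// Pass 1: depth-only "early Z" pass - lay down the depth of the visible surfaces.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
drawModel(fillShader);

// Pass 2: edge pass - test against the pre-filled depth buffer but leave it
// untouched, so edge fragments of hidden surfaces fail the depth test.
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_FALSE);
drawModel(edgeShader);
glDepthMask(GL_TRUE);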