Almost all the answers I've found involve multiplying a vector of normalised device coordinates by an inverse(projection * view) matrix, but every example I've tried results in at least two invalid things:
- No variation of the worldray.xy at varying ndc.z ranges, preventing me from generating a direction vector at varying near/far planes
- An invalid worldray.z
Can someone provide a working way to generate a world ray from mouse coordinates?
Edit:
I've added the code I'm using. If I use inverse, z is completely off from where I expect it to be; with affineInverse I at least get an accurate z for near.
mat4 projection = perspective(radians(fov), (Floating)width / (Floating)height, 0.0001f, 10000.f);
vec3 position = { 0, 0, -2 };
vec3 direction = { 0, 0, 1 };
vec3 center = position + direction;
mat4 view = lookAt(position, center, up);
vec2 ndc = {
-1.0f + 2.0f * mouse.x / width,
1.0f + -2.0f * mouse.y / height
};
vec4 near = { ndc.x, ndc.y, 0, 1 };
vec4 far = { ndc.x, ndc.y, -1, 1 };
mat4 invP = inverse(projection);
mat4 invV = inverse(view);
vec4 ray_eye_near = invP * near;
ray_eye_near.z = near.z;
vec4 ray_world_near = invV * ray_eye_near;
ray_world_near /= ray_world_near.w;
printf("ray_world_near x: %f, y: %f, z: %f, w: %f\n\r", ray_world_near.x, ray_world_near.y, ray_world_near.z, ray_world_near.w);
vec4 ray_eye_far = invP * far;
ray_eye_far.z = far.z;
vec4 ray_world_far = invV * ray_eye_far;
ray_world_far /= ray_world_far.w;
printf("ray_world_far x: %f, y: %f, z: %f, w: %f\n\r", ray_world_far.x, ray_world_far.y, ray_world_far.z, ray_world_far.w);
Here is a screenshot of what I'm experiencing
Edit 2: These are the numbers I get if using inverse instead of affineInverse and dividing by w:
This is the function I use to generate a normalized ray from screen space into the scene:
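The snippet below is a minimal sketch of such a function, assuming GLM, an OpenGL-style clip space where the near plane maps to z = -1, and a mouse position with a top-left origin; the name screenToWorldRay and its parameters are illustrative rather than the answer's original code:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a normalized world-space ray from a mouse position.
// mouse           - cursor position in pixels, origin at the top-left of the window
// width/height    - framebuffer size in pixels
// view/projection - the camera matrices used for rendering
glm::vec3 screenToWorldRay(const glm::vec2& mouse, float width, float height,
                           const glm::mat4& view, const glm::mat4& projection)
{
    // Mouse position to NDC; y is flipped because window y grows downward
    float x = 2.0f * mouse.x / width - 1.0f;
    float y = 1.0f - 2.0f * mouse.y / height;

    // Points on the near (z = -1) and far (z = +1) planes
    glm::vec4 nearClip(x, y, -1.0f, 1.0f);
    glm::vec4 farClip (x, y,  1.0f, 1.0f);

    // Unproject both points to world space with the full inverse,
    // then do the perspective divide
    glm::mat4 invVP = glm::inverse(projection * view);
    glm::vec4 nearWorld = invVP * nearClip;
    glm::vec4 farWorld  = invVP * farClip;
    nearWorld /= nearWorld.w;
    farWorld  /= farWorld.w;

    // Direction from the near-plane point to the far-plane point
    return glm::normalize(glm::vec3(farWorld - nearWorld));
}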
This will give you back a normalized ray from which you can get a parametric position along that ray into the scene with glm::vec3 worldPos = cameraPos + t * rayMouse. For example, when t = 1, worldPos would be 1 unit along the mouse cursor into the scene; you can use a line rendering class to better see what is happening.
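As a usage sketch (assuming the screenToWorldRay helper from above and that cameraPos, mouse, width, height, view, and projection already exist in your code):

// Point 1 unit along the ray under the mouse cursor
glm::vec3 rayMouse = screenToWorldRay(mouse, width, height, view, projection);
float t = 1.0f;
glm::vec3 worldPos = cameraPos + t * rayMouse;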
Note: glm::unProject can be used to achieve the same result:
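A minimal sketch of that approach (glm::unProject lives in <glm/gtc/matrix_transform.hpp>, expects window coordinates with a bottom-left origin and a depth in [0, 1]; the variable names are the same illustrative ones as above):

// Unproject the cursor at the near (0) and far (1) window depths,
// then take the normalized difference as the ray direction.
glm::vec4 viewport(0.0f, 0.0f, width, height);
glm::vec3 winNear(mouse.x, height - mouse.y, 0.0f); // flip y: window origin is bottom-left
glm::vec3 winFar (mouse.x, height - mouse.y, 1.0f);
glm::vec3 nearWorld = glm::unProject(winNear, view, projection, viewport);
glm::vec3 farWorld  = glm::unProject(winFar,  view, projection, viewport);
glm::vec3 rayMouse  = glm::normalize(farWorld - nearWorld);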
Note: These functions cannot be used to get an exact world space position of a fragment at the mouse coordinates. For that you have three options AFAIK:
- glReadPixels to get the depth value at the mouse/texture coordinate, which you can convert back from NDC to world space (see the sketch below).

Extra: 4. If you are doing object picking, you can get pixel-perfect GPU mouse picking by using a buffer to tag each different object in the scene and use glReadPixels to ID the object from its unique color tag.

I typically use option 1. for 3D math workflows and find it more than suffices for things like object picking, dragging, drawing 3D lines, etc.
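For the glReadPixels depth readback mentioned above, a minimal sketch (it assumes a current OpenGL context, the GLM setup from the earlier snippets, and that the fragment under the cursor was actually rendered into the depth buffer):

// Read the depth of the fragment under the cursor and unproject it to world space
float depth = 0.0f;
int winX = (int)mouse.x;
int winY = (int)(height - mouse.y);  // glReadPixels uses a bottom-left origin
glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

glm::vec4 viewport(0.0f, 0.0f, width, height);
glm::vec3 win((float)winX, (float)winY, depth);
glm::vec3 fragWorldPos = glm::unProject(win, view, projection, viewport);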