I am a graphics programming beginner working on my own engine, and I tried to implement frustum-aligned volume rendering.
The idea was to render multiple planes as vertical slices across the view frustum and then use the world coordinates of those planes for procedural volumes.
Rendering the slices as a 3D model and using the vertex positions as world-space coordinates works perfectly fine:
```glsl
// Vertex shader
gl_Position = P * V * vec4(vertexPosition_worldspace, 1);
coordinates_worldspace = vertexPosition_worldspace;
```
Result:
However, rendering the slices in frustum space and trying to reverse-engineer the world-space coordinates doesn't give the expected results. The closest I got was this:
```glsl
// Vertex shader
gl_Position = vec4(vertexPosition_worldspace, 1);
coordinates_worldspace = (inverse(V) * inverse(P) * vec4(vertexPosition_worldspace, 1)).xyz;
```
Result:
My guess is that the standard projection matrix somehow gets rid of some crucial depth information, but other than that I have no clue what I am doing wrong or how to fix it.
Well, it is not 100% clear what you mean by "frustum space". I'm going to assume that it refers to normalized device coordinates in OpenGL, where the view frustum is (by default) the axis-aligned cube `-1 <= x, y, z <= 1`. I'm also going to assume a perspective projection, so that the NDC `z` coordinate is actually a hyperbolic function of the eye-space `z`.

> My guess is that the standard projection matrix somehow gets rid of some crucial depth information

No, a standard perspective matrix in OpenGL looks like

```
( sx   0  tx   0 )
(  0  sy  ty   0 )
(  0   0   A   B )
(  0   0  -1   0 )
```

with `A = -(f + n) / (f - n)` and `B = -2 * f * n / (f - n)` for near and far planes `n` and `f`.
When you multiply this matrix by an `(x, y, z, 1)` eye-space vector, you get the homogeneous clip coordinates. Consider only the last two lines of the matrix as separate equations:

```
z_clip = A * z_eye + B
w_clip = -z_eye
```

Since we do the perspective divide by `w_clip` to get from clip space to NDC, we end up with

```
z_ndc = z_clip / w_clip = -A - B / z_eye
```

which is actually the hyperbolically remapped depth information, so that information is completely preserved. (Also note that we do the division for `x` and `y` as well.)

When you calculate `inverse(P)`, you only invert the 4D -> 4D homogeneous mapping. But you will get a resulting `w` that is not 1 again, so here:

```glsl
coordinates_worldspace = (inverse(V) * inverse(P) * vec4(vertexPosition_worldspace, 1)).xyz;
```

lies your information loss. You just skip the resulting `w` and use the `xyz` components as if they were Cartesian 3D coordinates, but they are 4D homogeneous coordinates representing some 3D point. The correct approach would be to divide by `w`:

```glsl
vec4 coordinates = inverse(V) * inverse(P) * vec4(vertexPosition_worldspace, 1);
coordinates_worldspace = coordinates.xyz / coordinates.w;
```
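To make the information loss concrete, here is a small pure-Python sketch of the same math. It assumes a symmetric frustum (`tx = ty = 0`) and made-up near/far and focal values; `project` applies the third and fourth matrix rows plus the perspective divide, and `unproject` applies `inverse(P)` worked out symbolically, once with and once without the homogeneous divide:

```python
# Round trip through a symmetric OpenGL perspective projection.
# near/far and the focal scales sx, sy are hypothetical example values.
near, far = 0.1, 100.0
sx, sy = 1.5, 1.5
A = -(far + near) / (far - near)
B = -2.0 * far * near / (far - near)

def project(p):
    """Eye space -> NDC: apply P, then divide by w_clip."""
    x, y, z = p
    x_clip, y_clip = sx * x, sy * y
    z_clip = A * z + B          # third row of P
    w_clip = -z                 # fourth row of P
    return (x_clip / w_clip, y_clip / w_clip, z_clip / w_clip)

def unproject(ndc, divide_by_w):
    """NDC -> eye space: apply inverse(P) to (ndc, 1)."""
    x, y, z = ndc
    # inverse(P) * (x, y, z, 1), worked out symbolically for this sparse matrix:
    hx, hy, hz = x / sx, y / sy, -1.0
    hw = (z + A) / B
    if divide_by_w:
        return (hx / hw, hy / hw, hz / hw)   # proper homogeneous divide
    return (hx, hy, hz)                      # what taking .xyz alone gives you

p_eye = (0.3, -0.2, -5.0)                    # a point 5 units in front of the camera
ndc = project(p_eye)
wrong = unproject(ndc, divide_by_w=False)
right = unproject(ndc, divide_by_w=True)
print(wrong)   # z is always -1.0: the depth appears "lost"
print(right)   # matches p_eye up to rounding
```

Without the divide, every unprojected point lands on the `z = -1` plane (the depth is hiding in the skipped `w`); with the divide, the original eye-space point comes back.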