According to the thread on this page, the equation given for calculating the depth buffer:
F_depth = (1/z - 1/n) / (1/f - 1/n)
is non-linear only because of the perspective divide. (Note that this is a combined transformation, taking the view-space z coordinate directly to the window-space coordinate.)
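(As a quick sanity check of the non-linearity, with numbers of my own choosing: for n = 1 and f = 100, z = 10 gives F_depth = (1/10 - 1)/(1/100 - 1) ≈ 0.91, while z = 50 gives ≈ 0.99, so most of the [0, 1] range is used up close to the near plane.)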
So, as per my understanding:
to convert it to a linear depth buffer, the only thing we would need to do is remove the perspective divide (?) and then apply the glDepthRange(a, b) transformation
given here.
In that case, the equation would be like this:
z_linear = z_ndc * w_clip = -( (f+n)/(f-n) ) * z_eye - ( 2fn/(f-n) )
and, with the depth-range transformation:
z_[0,1] = ( z_linear + 1 ) / 2
= ( -(f+n)*z_eye - 2fn + f - n ) / ( 2(f-n) )
but, on the LearnOpenGL site, for depth testing this is done:
First we transform the depth value to NDC, which is not too difficult:
float ndc = depth * 2.0 - 1.0;
We then take the resulting ndc value and apply the inverse transformation to retrieve its linear depth value:
float linearDepth = (2.0 * near * far) / (far + near - ndc * (far - near));
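Put together, the helper from that chapter looks roughly like this (near and far are assumed to be floats already defined in the shader):
float LinearizeDepth(float depth)
{
    float ndc = depth * 2.0 - 1.0; // window-space [0, 1] back to NDC [-1, 1]
    return (2.0 * near * far) / (far + near - ndc * (far - near)); // eye-space distance in [near, far]
}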
How is the non-linear to linear depth-buffer conversion being computed (i.e. how is that equation derived)?
Using glm in a right-handed system, I found the following solutions for converting from NDC depth [-1, 1] to eye depth [near, far].
perspective projection:
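A minimal sketch, assuming the standard glm::perspective matrix, with ndc, near and far already defined (ndc in [-1, 1]):
float eyeDepth = 2.0 * near * far / (far + near - ndc * (far - near)); // ndc = -1 gives near, ndc = 1 gives far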
orthographic projection:
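A minimal sketch, assuming the standard glm::ortho depth mapping, same variables as above:
float eyeDepth = (ndc * (far - near) + far + near) / 2.0; // ndc = -1 gives near, ndc = 1 gives far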
I advise you to test with your own near and far values to check the final result.