Looking at this tutorial, I noticed an interesting little calculation to get per-fragment depth from the camera.

float z = gl_FragCoord.z / gl_FragCoord.w;

I find this works for me, and it appears to be fairly accurate in terms of distance.

Now, certain other people are very certain that this is not a valid way to get depth/distance from the camera.

Can anyone shed any light onto the following:

(a) Whether this is in fact a valid and accurate way to get depth;

(b) How the mathematics behind it work.

Doesn’t make much sense to me. gl_FragCoord.z should be a 0…1 window-space depth value. And gl_FragCoord.w, for PERSPECTIVE projections only, should be -1/(eye-space Z). Divide those and what do you get?

No clue.

You can refer to chapter 2.13, “Coordinate Transformations”, of the OpenGL specification, and to the fact that gl_FragCoord.w in the fragment shader is always 1/clp_w.

The path of eye_z through the pipeline is as follows:

- from eye-space to clip-space: clp_z = (gl_ProjectionMatrix * eye_pos).z;
- from clip-space to normalized device coordinates: ndc_z = clp_z/clp_w;
- from NDC to window space: wnd_z = ndc_z * (dfar-dnear)/2 + (dfar+dnear)/2, where dnear and dfar are the glDepthRange parameters and are usually 0 and 1, so for simplicity let’s assume gl_FragCoord.z = wnd_z = (clp_z/clp_w)/2 + 0.5.

Thus, when you divide this by gl_FragCoord.w (which is the same as multiplying by clp_w), you get clp_z/2 + clp_w/2.
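The steps above can be traced numerically. This is a minimal sketch, assuming a standard perspective projection with made-up near/far planes of 1 and 100 and the default glDepthRange(0, 1); only the 3rd and 4th rows of the matrix are written out, since those are all that clp_z and clp_w depend on.

```python
# Assumed frustum values; any perspective near/far pair shows the same effect.
n, f = 1.0, 100.0
eye_z = -10.0                # eye-space z of the fragment (negative = in front)

# 3rd and 4th rows of a standard perspective projection matrix (eye_w == 1):
clp_z = -(f + n) / (f - n) * eye_z - 2.0 * f * n / (f - n)
clp_w = -eye_z

ndc_z = clp_z / clp_w        # perspective divide
wnd_z = ndc_z * 0.5 + 0.5    # glDepthRange(0, 1): this is gl_FragCoord.z
frag_w = 1.0 / clp_w         # this is gl_FragCoord.w

z = wnd_z / frag_w           # the tutorial's expression
print(z, clp_z / 2.0 + clp_w / 2.0)   # the two agree, per the derivation
```

Note that for eye_z = -10 the true camera distance is 10, while z comes out near 9.09, so the expression tracks distance without equaling it.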

Now let’s have a look at what clp_z and clp_w are. If you look at the projection matrix, and especially at its 3rd row (which is responsible for computing clp_z), it is clear that clp_z is the result of scale and shift operations on eye_z, mapping the [eye_near…eye_far] range to [-1…1], while clp_w is equal to -eye_z (4th row). That way, you can write your expression (clp_z/2 + clp_w/2) as (A*eye_z + B)/2 - eye_z/2 = C*eye_z + D, which is not equal to eye_z in the general case, but is a linear combination of it with constant coefficients.
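The linearity claim is easy to check numerically. A minimal sketch, again assuming a standard perspective matrix with hypothetical near/far planes of 1 and 100: fit C and D from two samples of clp_z/2 + clp_w/2, then see that a third sample lands exactly on the same line.

```python
n, f = 1.0, 100.0                                  # assumed frustum

def frag_z_over_w(eye_z):
    """gl_FragCoord.z / gl_FragCoord.w, i.e. clp_z/2 + clp_w/2."""
    clp_z = -(f + n) / (f - n) * eye_z - 2.0 * f * n / (f - n)
    clp_w = -eye_z
    return clp_z / 2.0 + clp_w / 2.0

# Recover C and D from two samples, then predict a third:
z1, z2 = frag_z_over_w(-5.0), frag_z_over_w(-50.0)
C = (z2 - z1) / (-50.0 - -5.0)
D = z1 - C * -5.0
predicted = C * -20.0 + D
print(predicted, frag_z_over_w(-20.0))   # match: the relation is linear
```

Since C and D depend only on the projection matrix, they are constant across all fragments of a frame, which is why the expression still works for comparing depths.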

So you can use it as a measure to compare depths, or as a parameter in any formula that depends linearly on eye_z, but not as a direct measure of eye_z. For that purpose you need to invert the projection, or simply compute eye_z in the vertex shader and pass it to the fragment shader as a varying.
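For the projection-inversion route, here is a sketch of recovering the true eye-space z from gl_FragCoord.z alone, assuming the same hypothetical perspective frustum (near 1, far 100) and glDepthRange(0, 1); the inverse formula comes from solving the 3rd-row mapping for eye_z.

```python
n, f = 1.0, 100.0                  # assumed near/far planes
eye_z = -25.0                      # ground truth for this fragment

# Forward: eye_z -> gl_FragCoord.z, as in the pipeline above:
clp_z = -(f + n) / (f - n) * eye_z - 2.0 * f * n / (f - n)
clp_w = -eye_z
wnd_z = (clp_z / clp_w) * 0.5 + 0.5

# Inverse: gl_FragCoord.z -> eye_z, by undoing each step:
ndc_z = wnd_z * 2.0 - 1.0
recovered = 2.0 * f * n / (ndc_z * (f - n) - (f + n))
print(recovered)                   # matches eye_z
```

In a shader the same two inverse lines would run per fragment, with n and f supplied as uniforms; the varying approach avoids even that by interpolating eye_z directly.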

This. This is what I wanted to know.

Thank you, nuclear_bro.