 # Playing with gl_FragDepth

I thought it would be quite interesting to find out how the Z buffer works from the fragment shader. So…

I continued with my last example. I have lots of quads of different sizes that always face the camera, and I would like to treat them as 3D objects. Therefore, a small quad overlapping a huge quad may end up covered by the big one.

The first problem is that gl_FragCoord.z is non-linear, so it is useless to me, because I send the size as a uniform variable.

I have tried the following options to fake gl_FragDepth:

VS

``````varying float position;

position = gl_Position.z;
``````

FS

``````varying float position;
uniform float radio;      // to simulate the front face of a cube
uniform float depthRange; // zFar - zNear of gluPerspective

// Option 1, the most reasonable
gl_FragDepth = (position - radio) / depthRange;
// Option 2
gl_FragDepth = (position - radio) * gl_FragCoord.w;
// Option 3
gl_FragDepth = gl_FragCoord.z - (gl_FragCoord.w * radio);
``````

None of them works; it looks like the radio is huge with every option. Any idea?

I gather from the GLSL spec that gl_FragCoord contains the window-space x, y, z and 1 / w (which I believe is -1 / z_in_eye_space).

The best option so far is

``````// Option 4
gl_FragDepth = ((1.0 / gl_FragCoord.w) - radio) / depthRange;
``````

But the occlusion is only correct among quads of similar radio. If the radio is very different, or against objects whose depth is not computed in the fragment shader, it fails…
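As a numeric sanity check (Python rather than GLSL; zNear = 1 and zFar = 300 are assumed, matching the values quoted later in the thread), the linear mapping of Option 4 can be compared against the hyperbolic window depth the hardware writes:

```python
# Numeric sketch (Python, not GLSL): compare the linear depth of
# "Option 4" with the hyperbolic depth the hardware writes.
# Assumes a standard perspective projection, zNear = 1, zFar = 300.

z_near, z_far = 1.0, 300.0
depth_range = z_far - z_near

def hardware_depth(eye_dist):
    """Window-space depth (0..1) for a point at positive eye distance."""
    # ndc_z = (f+n)/(f-n) - 2*f*n / ((f-n) * dist), then map [-1, 1] -> [0, 1]
    ndc = (z_far + z_near) / depth_range - 2.0 * z_far * z_near / (depth_range * eye_dist)
    return 0.5 + 0.5 * ndc

def option4_depth(eye_dist, radio=0.0):
    """The linear remapping from the post: (distance - radio) / depthRange."""
    return (eye_dist - radio) / depth_range

for d in (1.0, 10.0, 150.0, 300.0):
    print(d, round(hardware_depth(d), 4), round(option4_depth(d), 4))
```

The two columns diverge almost everywhere, which is why quads using Option 4 cannot be depth-tested correctly against ordinary geometry.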

Why is it so hard to modify the depth value?

Sorry to revive this thread, but I am desperate. I only want to ask a simple question.

What formula is gl_FragCoord.z equal to?

I mean… gl_FragCoord.z == formula;

It depends on your projection matrix.
When your coordinates (Wx, Wy, Wz, 1) are multiplied by the projection matrix, they yield post-perspective coordinates (Px, Py, Pz, Pw). Pz and Pw are then interpolated between vertices, and the division interpolated(Pz) / interpolated(Pw) is what ends up in gl_FragCoord.z.
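The steps above can be sketched numerically (Python, not GLSL; a symmetric glFrustum with n = 1, f = 300 is assumed):

```python
# Numeric sketch (Python): how a glFrustum-style projection turns an
# eye-space Z into gl_FragCoord.z.  Symmetric frustum, zNear = n, zFar = f.

n, f = 1.0, 300.0

def project(eye_z):
    """Return (Pz, Pw) of the clip-space position for a point at eye_z (< 0)."""
    # Third and fourth rows of the glFrustum matrix:
    pz = -(f + n) / (f - n) * eye_z - 2.0 * f * n / (f - n)
    pw = -eye_z
    return pz, pw

def frag_coord_z(eye_z):
    pz, pw = project(eye_z)
    ndc_z = pz / pw           # perspective division
    return 0.5 + 0.5 * ndc_z  # default glDepthRange(0, 1)

print(frag_coord_z(-1.0))    # near plane -> 0.0
print(frag_coord_z(-300.0))  # far plane  -> 1.0
```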

See:

VS

``````varying vec4 position;

position = gl_ProjectionMatrix * gl_Vertex;
``````

FS

``````varying vec4 position;

float depth = position.z / position.w;
if (depth == gl_FragCoord.z)
    discard;
else
    …
``````

The fragment is not discarded, so they are not equal.

Dark Photon, thanks, but that thread is unfinished and I have seen it before. What am I supposed to understand when I look at glFrustum?

The following equations give results very similar to gl_FragCoord.z. Does anyone see something wrong?

``````uniform float zNear, zFar; // 1.0 to 300.0
position = gl_ModelViewMatrix * gl_Position;

gl_FragCoord.z ~ (zFar / (zFar - zNear)) + ((zFar * zNear / (zNear - zFar)) / position.z);

gl_FragCoord.z ~ ((1.0 / zNear) - (1.0 / position.z)) / ((1.0 / zNear) - (1.0 / zFar));
``````

The last one works very well when mixing with objects rendered without shader depth calculation. My final aim is to modify position.z by adding some value; if the difference between values is not large it works properly… but both sometimes fail between objects whose depth is computed in shaders.
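Both formulas above can be verified numerically (Python sketch, assuming a symmetric gluPerspective projection and that position.z holds the positive eye-space distance to the point):

```python
# Numeric check (Python) of the two formulas quoted above, assuming a
# symmetric glFrustum/gluPerspective projection and that position.z is
# the positive eye-space distance.

z_near, z_far = 1.0, 300.0

def reference_depth(d):
    """gl_FragCoord.z for a point at positive eye distance d."""
    ndc = (z_far + z_near) / (z_far - z_near) - 2.0 * z_far * z_near / ((z_far - z_near) * d)
    return 0.5 + 0.5 * ndc

def formula_1(d):
    return (z_far / (z_far - z_near)) + (z_far * z_near / (z_near - z_far)) / d

def formula_2(d):
    return ((1.0 / z_near) - (1.0 / d)) / ((1.0 / z_near) - (1.0 / z_far))

for d in (1.0, 5.0, 42.0, 300.0):
    assert abs(formula_1(d) - reference_depth(d)) < 1e-12
    assert abs(formula_2(d) - reference_depth(d)) < 1e-12
print("both formulas match gl_FragCoord.z")
```

So the "~" above is actually an exact equality for this kind of projection matrix; the remaining discrepancies come from floating-point evaluation order, not from the formulas.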

You should multiply also by gl_ModelViewMatrix, if your gl_Position is in object space:

``````position = gl_ModelViewProjectionMatrix * gl_Position;
``````

The formula from the other thread is the same as multiplying by the MVP matrix and dividing in the fragment shader. It just uses the coefficients of the perspective projection matrix directly in an equation to compute gl_FragCoord.z. It will be different if you use a different kind of projection matrix.

Also, keep in mind that comparing two float values for equality in this context has a high chance of not working as you expect. For this test to work, you must use the exact same computation steps for the two values and somehow guarantee that the compiler won’t modify them. I assume that you used ftransform() to transform the vertex position; therefore, the position values (gl_FragCoord) are not necessarily computed in exactly the same order as your “position” varying, which can yield discrepancies.

gl_Position is the vertex shader output in clip space. It doesn’t make a lot of sense to multiply it with the modelview or projection matrices.

To get from gl_Position (or a varying that is equal to gl_Position) to the fragment depth value you need to divide Z by W (transformation from clip space to normalized device coordinates), then map the range [-1, 1] to the [n, f] range specified with glDepthRange (which is completely unrelated to the near/far range specified when using glFrustum). Usually this range is [0, 1], so it boils down to:
Zwindow = 0.5 + 0.5 * Zndc = 0.5 + 0.5 * Zclip / Wclip.
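As a small numeric sketch of that mapping (Python; the non-default range values are only for illustration):

```python
# Numeric sketch (Python): mapping NDC depth to window depth for an
# arbitrary glDepthRange(n, f).  With the default range (0, 1) this
# collapses to 0.5 + 0.5 * ndc_z.

def window_depth(ndc_z, range_near=0.0, range_far=1.0):
    """Map ndc_z in [-1, 1] to [range_near, range_far] (glDepthRange)."""
    return 0.5 * (range_far - range_near) * ndc_z + 0.5 * (range_far + range_near)

print(window_depth(-1.0))            # 0.0
print(window_depth(1.0))             # 1.0
print(window_depth(0.0, 0.2, 0.8))   # 0.5
```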

Why don’t you specify the depth per vertex?

Using gl_Position was a mistake. I really meant gl_Vertex. I have edited it in the earlier post.

In the following formula, can I assume that Zclip and Wclip come from gl_FragCoord?
Zwindow = 0.5 + 0.5 * Zndc = 0.5 + 0.5 * Zclip / Wclip
I prefer this one… but I am not sure whether Z is in object, eye or clip coordinates… I think it is in eye coordinates:
((1.0 / zNear) - (1.0 / Z)) / ((1.0 / zNear) - (1.0 / zFar))

That is a good point. It is because later I want to render spheres, where the radio is different for each fragment, but for now I am trying a simpler shape.

A general formula that should work with any shape is:

``````// zNear, zFar and radio are uniform variables

float height = radio; // calculate the height of the shape from the radio (for a quad it is just radio)
vec4 newPosition = position; // position = gl_Vertex;
newPosition = gl_ModelViewMatrix * newPosition;
newPosition.z = newPosition.z + height;
gl_FragDepth = ((1.0 / zNear) - (1.0 / newPosition.z)) / ((1.0 / zNear) - (1.0 / zFar));
``````
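One possible pitfall worth checking numerically (a guess, not something confirmed in the thread): OpenGL eye-space Z is negative in front of the camera, while the 1/z formula above expects a positive distance. A quick Python check:

```python
# Numeric check (Python) of a possible pitfall in the snippet above
# (a guess, not something confirmed in the thread): OpenGL eye-space Z
# is negative in front of the camera, while the 1/z formula expects a
# positive distance.

z_near, z_far = 1.0, 300.0

def formula(z):
    return ((1.0 / z_near) - (1.0 / z)) / ((1.0 / z_near) - (1.0 / z_far))

eye_z = -10.0          # a point 10 units in front of the camera
print(formula(eye_z))  # negative eye Z: result > 1, outside [0, 1]
print(formula(-eye_z)) # positive distance: valid depth in [0, 1]
```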

No, those would be Z and W of gl_Position, i.e. the vertex shader output.

But the real question is in which space do you want to change Z. It seems you want to simulate spheres, so eye space would work. In that case you simply pass eye space Z as a varying, modify it, then apply the usual transformations going from eye space to clip space to NDC to window space.

``````varying float eyeDepth;

float eyeDepth_modified = eyeDepth + ...;
vec2 clipZW = eyeDepth_modified * gl_ProjectionMatrix[2].zw + gl_ProjectionMatrix[3].zw;
gl_FragDepth = 0.5 + 0.5 * clipZW.x / clipZW.y;
``````
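The trick in this snippet can be checked numerically (Python sketch; a symmetric glFrustum with n = 1, f = 300 is assumed): only the z and w components of the projection matrix’s third and fourth columns are needed to turn an eye-space Z into a depth value.

```python
# Numeric check (Python) of the clip-space trick above.
# Symmetric glFrustum, zNear = n, zFar = f.

n, f = 1.0, 300.0

# (z, w) components of columns 2 and 3 of the glFrustum matrix,
# i.e. gl_ProjectionMatrix[2].zw and gl_ProjectionMatrix[3].zw:
col2_zw = (-(f + n) / (f - n), -1.0)
col3_zw = (-2.0 * f * n / (f - n), 0.0)

def frag_depth(eye_z):
    clip_z = eye_z * col2_zw[0] + col3_zw[0]
    clip_w = eye_z * col2_zw[1] + col3_zw[1]
    return 0.5 + 0.5 * clip_z / clip_w

print(frag_depth(-1.0))    # near plane -> 0.0
print(frag_depth(-300.0))  # far plane  -> 1.0
```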

Thank you Xmas, but the result is the same. I show you the result of your code (I am using quads and discarding pixels based on the radio to create the spheres).

You can see that the yellow and brown spheres do not occlude correctly against the red sphere.