gl_FragDepth portability issues?

I’m writing an OpenGL renderer for an old game. Originally, it didn’t use any depth testing at all - just rendered its polys back to front.

I want to use depth testing to break the order dependency on opaque objects, but normal depth testing is per-pixel and I need per-poly to simulate the original algorithm used by the game. (I don’t want to get into why per-pixel won’t work, so please take my word on that.)

To accomplish that, I am just setting gl_FragDepth = depth; in the fragment shader so that the entire polygon gets the same depth. This works exactly as intended… on my own machine. I have a GeForce 9800 GT.

I sent a build out for testing on some other cards and the results were a bit surprising.

Radeon HD 4850: Works perfectly.
Radeon X1650 Pro: gl_FragDepth seems to be coarsely quantized.
GeForce 6800 GT: gl_FragDepth appears extremely quantized - only 2 levels visible!

Here’s my test setup:

Camera: znear 1/16, zfar 90
Heightmapped terrain mesh, unit grid size, rendered with the depth shader so that each triangle’s depth is the minimum z of any of its vertices.

After rendering the mesh, I render a view-aligned plane (a large quad) at a certain z. The plane is rendered with shaders off (reverting to fixed function). Because the plane cuts through the mesh, and the plane is rendered with depth testing ON, the behavior of the depth buffer can be visualized this way.

I animate the plane from z=-10 to -40 in unit steps. Because the mesh grid is also unit size, as the plane cuts through the mesh, at most one strip of triangles should be revealed in each step of the animation. This is exactly what happens on my GeForce 9800 GT.


But, look what happens on the Radeon X1650 Pro:


Notice that sometimes z changes and no triangles are revealed, while at other times a single step reveals a big chunk at once.
The GeForce 6800 GT is even worse - there are only 2 z-levels where any visible change occurs at all (!)

I am not doing anything fancy - just using the most basic features of OpenGL 2. Here is the actual code from my fragment shader (no vertex shader used atm):

uniform sampler2D base;
void main() {
	gl_FragColor = texture2D(base, gl_TexCoord[0].st);
	gl_FragDepth = gl_Color.a;
}

Any idea why this would work on my card but not on those others? Does the spec allow anything implementation dependent with regards to gl_FragDepth that I need to take into account?

You’re stuffing the color varying’s alpha in as the frag depth. A couple of yellow warning lights flash when I see that.

1: On ancient NV cards, maybe like the GeForce 6, IIRC (and that’s a big “IIRC” – this is ancient history), the COLOR varying was subject to 8-bit quantization. Use a TEXCOORD or generic vertex attribute instead.

2: The COLOR varying is subject to clamping. Use a TEXCOORD or a generic vertex attribute instead. Or use glClampColor to tune this behavior.

3: What type are you using to populate this vertex attribute, and does it have the desired precision? (or are you computing it in the vertex shader?)
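For reference, here’s roughly what a generic-attribute version could look like (a sketch only, assuming GLSL 1.10-era syntax; the attribute and varying names are mine, and the attribute would be bound on the CPU side with glBindAttribLocation / glVertexAttribPointer):

```glsl
// --- vertex shader (sketch) ---
attribute float polyDepth;     // generic attribute: no clamping or color quantization
varying float fragDepthValue;  // full-precision varying carrying the depth

void main() {
    fragDepthValue = polyDepth;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position    = ftransform();
}

// --- fragment shader (sketch) ---
uniform sampler2D base;
varying float fragDepthValue;

void main() {
    gl_FragColor = texture2D(base, gl_TexCoord[0].st);
    gl_FragDepth = fragDepthValue;
}
```

Since all three vertices of a triangle carry the same depth value, the interpolated varying is constant across the triangle, just like the color-alpha approach, but without the COLOR path’s clamping/quantization hazards.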

Hi Dark Photon, thanks for the reply to my (rather convoluted) post.

Color is populated by glColorPointer(4, GL_FLOAT, …)
I don’t use a vertex shader currently.

I was just using color.a because it is convenient - the depth shader is only for rendering opaque objects, so alpha is going to be 1.0 anyway.

I see your point though, if you tell OpenGL it’s a color, it seems reasonable that the driver would quantize it under the assumption that it’s a color.

I will try using a generic float attribute instead and report back on whether that fixes the problem!

PS: Not important if the above suggestion works, but I realized a better way to visualize the z-buffer is by reading it out as an image with glReadPixels. Here are some (linearized) depth images from those different cards:

GeForce 9800 GT (correct):

Radeon X1650 Pro:

GeForce 6800 GT:

I’m happy to report that storing the depth value in a regular attribute has completely solved the problem!

Thanks again for your help Dark Photon :slight_smile:

No problem! Glad it worked!