I’m writing an OpenGL renderer for an old game. Originally, it didn’t use any depth testing at all - just rendered its polys back to front.
I want to use depth testing to break the order dependency on opaque objects, but normal depth testing is per-pixel and I need per-poly to simulate the original algorithm used by the game. (I don’t want to get into why per-pixel won’t work, so please take my word on that.)
To accomplish that, I am just setting gl_FragDepth = depth; in the fragment shader so that the entire polygon gets the same depth. This works exactly as intended… on my own machine. I have a GeForce 9800 GT.
I sent a build out for testing on some other cards and the results were a bit surprising.
Radeon HD 4850: Works perfectly.
Radeon X1650 Pro: gl_FragDepth seems to be coarsely quantized.
GeForce 6800 GT: gl_FragDepth appears extremely quantized - only 2 levels visible!
Here’s my test setup:
Camera: znear 1/16, zfar 90
Terrain: a heightmapped mesh with unit grid size, rendered with the depth shader so that each triangle’s depth is the minimum z of any of its vertices.
After rendering the mesh, I render a view-aligned plane (a large quad) at a certain z. The plane is rendered with shaders off (reverting to fixed function). Because the plane cuts through the mesh, and the plane is rendered with depth testing ON, the behavior of the depth buffer can be visualized this way.
I animate the plane from z=-10 to -40 in unit steps. Because the mesh grid is also unit size, as the plane cuts through the mesh, at most one strip of triangles should be revealed at each step of the animation. This is exactly what happens on my GeForce 9800 GT.
Video: http://vimeo.com/11773509
But, look what happens on the radeon X1650 Pro:
Video: http://vimeo.com/11773536
Notice that sometimes z changes and no triangles are revealed, while at other steps a big chunk gets revealed at once.
The GeForce 6800 GT is even worse - there are only 2 z-levels where any visible change occurs at all (!)
I am not doing anything fancy - just using the most basic features of OpenGL 2. Here is the actual code from my fragment shader (no vertex shader used atm):
uniform sampler2D base;

void main() {
    gl_FragColor = texture2D(base, gl_TexCoord[0].st);
    gl_FragDepth = gl_Color.a;
}
Any idea why this would work on my card but not on those others? Does the spec allow anything implementation-dependent with regard to gl_FragDepth that I need to take into account?