Depth buffer problems (z fighting)

I have a couple of questions on the depth buffer.

First, an explanation of what I’m doing. Nothing special really. I have a mesh that I’m using to display random terrain. Basically, x and z form a fixed grid, with y being a height map. I have done this before in other applications and it has worked correctly. The difference here is that the coordinate system is quite large: the range of x and z is around 1 million, but the mesh itself is a quite reasonable 40x40 grid. The problem is that it appears the z-buffer isn’t working at the edges of the grid.
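
To make that concrete, this is roughly the kind of thing I’m doing (an illustrative sketch only, with made-up names and a stand-in height function, not my actual code):

#include <stdio.h>

#define GRID 40

/* stand-in for the real height map */
static float height(int i, int j)
{
    return (float)((i * 13 + j * 7) % 100);
}

int main(void)
{
    static float verts[GRID][GRID][3];
    const float extent  = 1.0e6f;                /* x/z range of ~1 million */
    const float spacing = extent / (GRID - 1);   /* works out to ~25,600 units */

    for (int i = 0; i < GRID; ++i) {
        for (int j = 0; j < GRID; ++j) {
            verts[i][j][0] = i * spacing;    /* x: fixed grid */
            verts[i][j][1] = height(i, j);   /* y: height map */
            verts[i][j][2] = j * spacing;    /* z: fixed grid */
        }
    }
    printf("grid spacing: %.0f units, far corner x/z: %.0f\n",
           spacing, verts[GRID - 1][GRID - 1][0]);
    return 0;
}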

Also just to clarify, I use
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LEQUAL);
(I’ve also tried GL_LESS).

So, my questions:
What does GL use to normalize the z values? What I’m doing isn’t any different from what I’ve done before, except that I’ve always used a smaller coordinate system. Does GL use the near and far clipping planes to normalize the z value in the buffer? I know that the z-buffer values can vary between 0 and 1.0.

The effect I see isn’t a case of adjacent pixels overriding each other; it’s more a case of triangles that are several grid units apart overwriting each other.

Any thoughts or ideas would be appreciated.

Well,

In answer to my own question.

It turns out the problem was due to what the near and far clipping planes were set to, specifically the near plane. Apparently GL assigns more z-buffer resolution to things near the viewer, which makes sense. Originally the near and far planes were 10 and 2e6. Most of the bits were being used for the near z, so everything else was clamping together out at the edge of the grid. By changing the near plane to 200, the z-buffer started working correctly.

My understanding is that OpenGL calculates normalized z-values [0 to 1] by taking a pixel’s distance from the near plane and dividing it by the distance between the far and near planes.
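
To see how the near plane changes where the depth values land, I did a quick numeric check. This is just a sketch with my own names, assuming a standard gluPerspective/glFrustum-style projection and the default [0, 1] depth range, but it shows the mapping isn’t simply linear in eye-space distance:

#include <stdio.h>

/* Window-space depth for an eye-space distance z, given near/far planes,
   for a standard perspective projection and the default [0, 1] depth range. */
static double window_depth(double z, double n, double f)
{
    return f * (z - n) / (z * (f - n));
}

int main(void)
{
    const double f = 2.0e6;
    const double dist[] = { 1e3, 1e4, 1e5, 1e6, 2e6 };

    for (int i = 0; i < 5; ++i)
        printf("z = %8.0f   depth(near=10) = %.6f   depth(near=200) = %.6f\n",
               dist[i],
               window_depth(dist[i], 10.0, f),
               window_depth(dist[i], 200.0, f));
    return 0;
}

/* With near = 10, everything beyond about z = 1e5 already maps above 0.9999,
   while with near = 200 the values are spread out noticeably more. */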

I’d like to say a few things about the depth buffer here.

You are correct that there is better precision close to the near clipping plane, and that is because of the way floats work and because it’s a frustum. This is not only an OpenGL issue.
However, when using floats, you have to take care of how you assign your clipping planes. The bigger the far:near ratio, the more bits of precision you lose. So having a ratio of 2e5 will cause a large number of bits to be “wasted”. And by increasing the distance to the near plane, the ratio is smaller and you lose fewer bits. I suggest a ratio of about 1e3 to 1e4 (if you are using a 32-bit depth buffer; if you have 16 bits, you should use a lower ratio).
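
To put rough numbers on that: for a standard perspective projection and a depth buffer with b bits of precision, the eye-space resolution at distance z works out to roughly z*z*(far-near)/(near*far*2^b), so log2(far/near) is about how many bits of precision you give up at the far plane. A quick sketch (my own names, 24 depth bits assumed just for illustration):

#include <math.h>
#include <stdio.h>

/* Approximate eye-space depth resolution at distance z for a standard
   perspective projection and a depth buffer with 'bits' bits:
   delta_z ~= z*z*(f - n) / (n*f*2^bits) */
static double depth_resolution(double z, double n, double f, int bits)
{
    return z * z * (f - n) / (n * f * pow(2.0, bits));
}

int main(void)
{
    const double f = 2.0e6;
    const int bits = 24;   /* assumed depth buffer size, for illustration */

    printf("approx. bits lost, near=10 : %.1f\n", log2(f / 10.0));
    printf("approx. bits lost, near=200: %.1f\n", log2(f / 200.0));
    printf("resolution at far plane, near=10 : %.0f units\n",
           depth_resolution(f, 10.0, f, bits));
    printf("resolution at far plane, near=200: %.0f units\n",
           depth_resolution(f, 200.0, f, bits));
    return 0;
}

With near = 10 that comes out to a resolution of tens of thousands of units near the far plane, so triangles that are well apart in eye space can easily land on the same depth value, which would explain the kind of artifact you described.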