Depth buffer precision

Hi all,

I have a big problem I can’t understand.

I am rendering a simple cube using this code:


gluPerspective(45.0, (float)m_iWidth / (float)m_iHeight, 1.0f, 100.0);

glLoadIdentity();

glTranslatef(0.0f, 0.0f, -z);       // Translate into/out of the screen by z

glRotatef(xrot, 1.0f, 0.0f, 0.0f);  // Rotate on the X axis by xrot
glRotatef(yrot, 0.0f, 1.0f, 0.0f);  // Rotate on the Y axis by yrot

drawCube(1, -1);


I use a shader that colors each fragment by its depth.

The vertex shader is:

void main()
{
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}

and the fragment shader:

void main()
{
    gl_FragColor = vec4(gl_FragCoord.z, gl_FragCoord.z, gl_FragCoord.z, 1.0);
}

The problem is that when I zoom in toward the cube I get big artifacts (little bands).

The normal display (where you can also see the problem, though it’s not easy) is shown here:

Finally, the problem is here:

FYI, this only happens when I rotate the camera.

I hope you’ll be able to help me.

Thank you.

Where are your artifacts? I simply don’t see anything in the 2nd picture.
You mean that just after running your application you see a correct image, and then, rotating the camera, you see some bands?

I am talking about the diagonal lines (also in the first image). I agree it’s hard to see, but it’s there…

Aah, OK! I am using another screen and I can see it! (This CRT is crap.)

So if I understand correctly, the 2nd picture is a zoom on one of the cube’s polygons?

I think it is normal if your depth buffer precision is low (16 bits). It is the same as when you encode a grayscale gradient with a small color depth.

That’s just the 8-bit-per-channel precision of your framebuffer. Open the image in Photoshop/GIMP/etc. and use the color picker tool; you’ll see that there is only a one-bit difference between each of the “bands”.

OK, I see, but my depth buffer has 24 bits of precision; is that not enough?

Also, I want to use the depth buffer to perform some depth peeling, and these lines cause big trouble… How can I avoid this problem?

Thank you.

No, your depth buffer precision is fine. Xmas is right: your problem is the framebuffer color depth, not the depth buffer precision.

I don’t know if it is possible, but the best way would be to read the depth value directly in the shader that computes the depth peeling, if there is one, to avoid the data loss when writing to the depth buffer. Another method is to use an FBO and write your depth data into a floating-point texture (32 bits if you don’t want any loss at all).

So what you mean is that using glCopyTexImage2D is no good because it only reads from the framebuffer, and thus the problem remains. That’s why I have trouble with the depth peeling, since I use glCopyTexImage2D to get the depth buffer.

I don’t know FBOs at all, but if you tell me for sure that this can be the solution, I am going to investigate it…

Thank you very much for your help, I have spent several days trying to understand it, unsuccessfully.

Thanks Xmas and dletozeun.

You can also encode the depth in an 8-bit fixed-point texture using some bit shifting. It is a little more complicated, but it saves memory.

EDIT: And with this last method you can use glCopyTexImage2D.

Here is an example:

You shouldn’t store the depth in a color render target; use a depth texture instead.

oc2k1, to fill the depth texture, I normally need to use glCopyTexImage2D, right?

If you can’t use framebuffer objects, yes, you can do this and then give the texture to the depth peeling shader.
If your hardware supports FBOs (the GL_EXT_framebuffer_object extension), you can create a depth texture and attach it to the framebuffer’s depth attachment point.
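A minimal sketch of that setup, assuming the GL_EXT_framebuffer_object entry points are available (`width` and `height` are placeholders; error and completeness checks are omitted):

```c
/* Create a depth texture and attach it to an FBO's depth attachment point. */
GLuint depthTex, fbo;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height,
             0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, depthTex, 0);

/* No color attachment here, so disable color draws/reads for this FBO. */
glDrawBuffer(GL_NONE);
glReadBuffer(GL_NONE);

/* Render the scene: depth lands in depthTex at full 24-bit precision,
 * and the texture can then be bound as input to the depth peeling pass. */
```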

You can find a quick tutorial on how to render to a texture with an FBO here:

Thank you, this is exactly what I’m doing, reading these articles… :)