Z-Buffer flickering? But Why?

Hi,

I’m loading a 3DS model in my app. When it is displayed, there are “black interferences” on the model. It looks like a problem with the Z-buffer, but it can’t really be that, because it even occurs if I load a plane consisting of just 2 polygons.
I have a GeForce4 Mobile and a GeForce2 card. The problem occurs on both of these cards.
If you have any suggestions, please tell me.
Thanks a lot!
Greets,

Martin

Can you post a pic?

Kevin

No, sorry, I mailed it to you. Hope this is okay with you.
I just figured out that this flickering does not appear on TNT cards. Now I’m completely confused.

Greets,

Martin

Martin, I have taken a look at your pics. What I think I am seeing is an effect called z-fighting. This is where you have coplanar, or nearly coplanar, polygons whose depth values are very close together, which results in some fragments of the ‘far’ polygon intruding on areas of the ‘near’ polygon. This is due to the limited precision of the z-buffer.

In a hand-waving fashion you should be able to verify this by shortening the distance between the camera and the object. As this distance decreases, the inaccuracies should decrease also. [The converse is true.]

There are many ways to counteract this phenomenon. Here are just a few:

  1. Ensure that your models are created in such a way as to minimise (or indeed eliminate) coplanar polygons.

  2. Ensure that your z-buffer is working in high precision i.e. greater than 16-bit if possible.

  3. Increase the distance to your near clip plane.

  4. Decrease the distance to your far clip plane. Please note that 3 is far more effective than 4. (A minimal sketch of points 2 and 3 follows below.)
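For points 2 and 3, here is a minimal sketch under GLUT (the window title and the 1.0/500.0 near/far values are illustrative assumptions of mine, not values from this thread). GLUT can only ask for a depth buffer; how many bits you actually get is up to the driver, so check it afterwards:

  /* Minimal GLUT sketch: request a depth buffer and choose near/far
     planes that keep z precision where the model actually is. */
  #include <GL/glut.h>

  static void display(void)
  {
      glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
      /* ... draw the model here ... */
      glutSwapBuffers();
  }

  static void reshape(int w, int h)
  {
      if (h == 0) h = 1;              /* avoid division by zero */
      glViewport(0, 0, w, h);
      glMatrixMode(GL_PROJECTION);
      glLoadIdentity();
      /* Point 3: a near plane of 1.0 instead of, say, 0.01 buys far
         more depth precision than trimming the far plane (point 4). */
      gluPerspective(45.0, (double)w / (double)h, 1.0, 500.0);
      glMatrixMode(GL_MODELVIEW);
  }

  int main(int argc, char **argv)
  {
      glutInit(&argc, argv);
      glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH);
      glutCreateWindow("depth precision sketch");
      glEnable(GL_DEPTH_TEST);
      glutDisplayFunc(display);
      glutReshapeFunc(reshape);
      glutMainLoop();
      return 0;
  }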

If you have any further questions please don’t hesitate to ask,
Kevin

Hi,

how can I figure out what depth my z-buffer is set to? I didn’t know the proper word for this effect, so I called it flickering (that’s what we call it in Germany! ;-)).
Only a few of the polygons are coplanar; most are not! It is very freaky, because I tried it on a TNT card and there is no problem there, but it shows up on all GeForce cards. Is there a hidden setting somewhere that I can change (in software??)? This really makes me go wild, because I’m running towards a deadline and have two projects at the same time.
Do you have any links for me on this topic?
Thanks so far!
Greets,

Martin

Hi,

thanks a lot for your hints! They really saved my butt. But I still don’t understand why a higher-end card like the GeForce has severe problems with the depth buffer while the TNT does not! I believe this will stay a mystery for me!

Greets,

Martin

P.S.: If I can help you out with something, drop me a line.

I expect it’s just the degree of accuracy, or rather the depth, of the z-buffer.

Hi Gavin,

Okay, accuracy and all that makes sense, but there must be a way to adjust those settings. I couldn’t find anything on this topic in the OpenGL documentation, and for the NVIDIA settings I couldn’t even find any documentation.
Neither the driver settings nor the NVIDIA support page gave me any solution.
Any ideas or suggestions??

Greets,

Martin

I am just guessing. Can you find out what depth your z-buffer is on each of the cards?

Are you working with Windows? If so, the pixel format is set with the ChoosePixelFormat and SetPixelFormat calls, and you can get info on the current pixel format with GetPixelFormat and DescribePixelFormat. If you are using GLUT (which wraps the system-specific stuff, trying to make the API system-independent), then you are probably looking at glutInit and glutInitDisplayMode (I don’t know the parameters), and glutGet(GLUT_WINDOW_DEPTH_SIZE).
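If it helps, here is a minimal sketch of the query itself (it assumes a GL context is already current, e.g. inside a GLUT program; on plain Win32 the same number sits in the cDepthBits field of the PIXELFORMATDESCRIPTOR that DescribePixelFormat fills in):

  /* Sketch: report the depth buffer precision of the current context. */
  #include <stdio.h>
  #include <GL/glut.h>

  void report_depth_bits(void)
  {
      GLint bits = 0;
      glGetIntegerv(GL_DEPTH_BITS, &bits);      /* windowing-independent */
      printf("depth buffer: %d bits (GL)\n", (int)bits);
      printf("depth buffer: %d bits (GLUT)\n", /* GLUT's view of it */
             glutGet(GLUT_WINDOW_DEPTH_SIZE));
  }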

As for GeForces, they are newer and faster, but as always, that’s a performance/visual-quality trade-off.

Not sure - but to eliminate Z fighting, you might try to bump the Z values of the incoming vertices up or down, thereby forcing one of the coplanar polygons out of the “fight” (but then you need to detect coplanar polygons, which requires an additional pass through your model and is O(n^2) in general, since you need to check each polygon against every other). You might also try using a different depth function (the glDepthFunc call) - try GL_LESS or GL_LEQUAL. Theoretically, GL_LESS is the default, which means the test passes and the fragment is displayed only if the incoming Z value is LESS than the Z value already stored in the depth buffer. But in practice, finite precision blows it.
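A rough sketch of the nudging idea (EPSILON and the vertex layout are assumptions of mine; you would run this on triangles your own coplanarity pass has flagged):

  /* Sketch: push one triangle slightly along its face normal so two
     coplanar surfaces no longer share a plane. Tune EPSILON to the
     scale of your scene. */
  #include <math.h>

  #define EPSILON 0.001f

  void nudge_triangle(float v[3][3])
  {
      /* Face normal = cross(v1 - v0, v2 - v0), normalized. */
      float ax = v[1][0] - v[0][0], ay = v[1][1] - v[0][1], az = v[1][2] - v[0][2];
      float bx = v[2][0] - v[0][0], by = v[2][1] - v[0][1], bz = v[2][2] - v[0][2];
      float nx = ay * bz - az * by;
      float ny = az * bx - ax * bz;
      float nz = ax * by - ay * bx;
      float len = (float)sqrt(nx * nx + ny * ny + nz * nz);
      int i;
      if (len == 0.0f)
          return;                    /* degenerate triangle */
      for (i = 0; i < 3; i++) {
          v[i][0] += nx / len * EPSILON;
          v[i][1] += ny / len * EPSILON;
          v[i][2] += nz / len * EPSILON;
      }
  }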

You might also try inverting the Z buffer (glDepthRange(1.0, 0.0)) and then using the GL_GREATER or GL_GEQUAL depth function - maybe on some systems it will yield better results. But then forget about MiniGL.
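A minimal sketch of that inverted setup; note that the depth clear value has to be flipped to match, a detail my sketch assumes rather than something stated above:

  /* Sketch: inverted depth range. Near now maps to 1.0 and far to 0.0,
     so both the depth test and the clear value must be flipped. */
  #include <GL/gl.h>

  void setup_inverted_depth(void)
  {
      glEnable(GL_DEPTH_TEST);
      glDepthRange(1.0, 0.0);   /* invert the window-space z range */
      glDepthFunc(GL_GEQUAL);   /* 'nearer' now means larger z     */
      glClearDepth(0.0);        /* clear to the new 'far' value    */
  }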


Hi,

Thanks a lot for all your replies! You really cleared things up for me.

Greets,

Martin

You can also use the stencil buffer to remove Z-fighting in some instances.
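A classic instance is a decal lying flat on a wall. A hedged sketch of that approach (draw_wall and draw_decal are placeholder functions of mine, and the context must have been created with a stencil buffer, e.g. by adding GLUT_STENCIL to the display mode):

  /* Sketch: stencil-buffer decal. Draw the wall while marking its
     pixels in stencil, then draw the coplanar decal only on those
     pixels with the depth test off, so the two surfaces cannot fight. */
  #include <GL/gl.h>

  void draw_wall(void);
  void draw_decal(void);

  void draw_decal_with_stencil(void)
  {
      glEnable(GL_STENCIL_TEST);

      glStencilFunc(GL_ALWAYS, 1, 0xFF);          /* pass 1: tag the wall */
      glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
      draw_wall();

      glStencilFunc(GL_EQUAL, 1, 0xFF);           /* pass 2: decal only   */
      glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);     /* where the wall is    */
      glDisable(GL_DEPTH_TEST);
      draw_decal();
      glEnable(GL_DEPTH_TEST);

      glDisable(GL_STENCIL_TEST);
  }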

what about glPolygonOffset()???
http://www.opengl.org/developers/faqs/technical/polygonoffset.htm
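That is probably the simplest route of all. A minimal sketch (the 1.0/1.0 factor/units pair is the usual starting point, not a value taken from that FAQ, and draw_base/draw_overlay are placeholders for your own geometry):

  /* Sketch: glPolygonOffset pushes the base polygon slightly away in
     depth during rasterization, so the coplanar overlay wins cleanly. */
  #include <GL/gl.h>

  void draw_base(void);
  void draw_overlay(void);

  void draw_with_polygon_offset(void)
  {
      glEnable(GL_DEPTH_TEST);
      draw_overlay();                  /* the polygon that should 'win'   */

      glEnable(GL_POLYGON_OFFSET_FILL);
      glPolygonOffset(1.0f, 1.0f);     /* push the base away from the eye */
      draw_base();
      glDisable(GL_POLYGON_OFFSET_FILL);
  }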