Depth Buffer and hardware problems

I don't know what I'm doing wrong, but it must be big. When I render polygons to the screen, the depth buffer seems to screw up: sections of polygons appear behind others when they should be in front, and vice versa. I've tried running in software emulation mode, and while it was painfully slow, there were no jaggies or misplaced polygon sections (odd that it works in software yet not in hardware). I've also run it on various computers with different hardware cards (with the latest drivers), and the depth buffer looks horrible on all of them. When I disable the depth buffer, rendering is fine (granted, nothing is depth buffered, but there are no more jaggies or sections of polygons appearing in front of others; it's all or nothing). So my question is: what am I doing wrong? How can I fix this? I'm out of control!

Oh yeah, one more thing: it works on one computer in hardware acceleration mode, but they did some funky stuff to their machine, and I don't think that should be necessary to get depth buffering working, especially considering I've run other GL apps on mine and they work just fine.

Thanks for your help and time.

Seems like a precision/distribution problem.

The Microsoft GDI generic implementation uses a depth buffer with 32 bits of precision.
The NVIDIA TNT card uses 24 bits instead; this can cause some z-fighting.
I ran test applications on the Matrox G200 card, which can use either a 24-bit or a 32-bit depth buffer, and with the lower setting many z-fights occurred.

A basic consideration with depth buffering is to push the near clipping plane out as far as possible, and to keep the ratio of the far clipping plane to the near one as small as possible.

This optimizes the distribution of depth values in the buffer: the mapping from eye-space distance to depth values is nonlinear, so most of the precision is concentrated just in front of the near plane, and a very small near value wastes nearly all of it on the first few units of depth.

Try adjusting the near and far clipping plane distances (zNear and zFar) when you call gluPerspective().


Most excellent! That was the problem exactly. The near clip plane fixed it all; I just needed to move it back a bit. You rule! Thanks for your help, I was quite worried about how I was going to fix that.