GL Depth

This may seem like an odd question, but I thought it was somewhat important…
OpenGL has its own depth buffer, but would it be better (faster) to turn the depth buffer off and draw polygons according to precompiled render lists (similar to the BSP format, I think)… or to calculate a render list at runtime?
Or might this depend on how well the render list routine was designed?


It MIGHT be. There are too many issues to consider when thinking about such things.


  • How quickly can you get a list of sorted polys? Will sorting, or traversing a more complicated structure, be less efficient than using hardware Z-buffering? Is this true for all hardware configs?
  • If you submit polys in depth order, you can’t take advantage of minimising texture changes. Is the penalty inflicted by more texture changes less than the penalty for Z-buffering?

And plenty more questions. The answer, as always, is NFI.
=) It depends on the system. Once upon a time, when there was no graphics h/w, it would have been a safe bet that Z-buffering would suck. But now, probably not. Some h/w can do some funky operations for free. FREE, I tell you. (The Onyx, for instance, can texture a triangle with NO penalty =)


Many 3D cards have a higher fillrate with depth buffering disabled.
The difference may be as low as a few percent, or it may be very significant.
For example, for the 3Dlabs Permedia2:
83 Mpix/sec (without depth buffering)
43 Mpix/sec (depth buffered)

And some 3D cards prefer front-to-back order when depth buffering is enabled.
I remember that about the Rendition V1000, and now it looks like the ATI Radeon really likes it too.

But polygon sorting may lead to frequent texture (or other state) changes.

You can create a texture-sorted “renderlist” as you walk the BSP tree.
And you can create an additional depth-sorted list holding all the transparent polygons.

The renderlist may contain opaque polygons, sorted by texture and front-to-back within each texture, and transparent polygons, sorted back-to-front.

[This message has been edited by Serge K (edited 09-15-2000).]