We need multiple simultaneous depth tests!

…posted as a result of this thread:
http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/005884.html

Unless something obvious was overlooked, there appear to be a number of important problems in 3D graphics that could be solved with multiple simultaneous depth tests. Order-independent transparency is a MAJOR one, and I have yet to see a way of doing it in plain OpenGL. Other problems that involve “depth peeling” are much harder than they need to be.
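To make the idea concrete, here is a minimal sketch of the second depth test that depth peeling needs, written as a 2.0-style fragment shader. The uniform names (prevDepth, viewportSize) are just illustrative, and the host code that copies each pass's depth buffer into prevDepth is assumed; for the very first pass, prevDepth would be cleared to 0.0 so nothing gets rejected.

uniform sampler2D prevDepth;   // depths of the layer peeled last pass
uniform vec2 viewportSize;     // framebuffer size in pixels

void main()
{
    // Second depth test: reject everything at or in front of the
    // previously peeled layer. The ordinary GL_LESS test then keeps
    // the nearest of the surviving fragments, i.e. the next layer back.
    float peeled = texture2D(prevDepth, gl_FragCoord.xy / viewportSize).r;
    if (gl_FragCoord.z <= peeled)
        discard;

    gl_FragColor = gl_Color;   // shade the surviving fragment as usual
}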

I would love it if someone could correct me on these issues. If not, multiple depth tests would make a nice addition to 2.0.

I believe doing so would require redesigning all the graphics cards out there. Also, how many depths do you wish to save? E.g., with alpha-blended particle systems, a lot of overlapping depths can pile up very quickly.

Originally posted by zed:
I believe doing so would require redesigning all the graphics cards out there. Also, how many depths do you wish to save? E.g., with alpha-blended particle systems, a lot of overlapping depths can pile up very quickly.

At a glance, it seems that many of the new features in 2.0 will require some reworking on the part of card manufacturers and driver developers anyway. Besides, according to this article, we only need two depth buffers (for alpha blending, at least): http://developer.nvidia.com/docs/IO/1316/ATT/order_independent_transparency.pdf
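If I read the paper right, the host side of the peeling loop would look roughly like the sketch below. drawTransparentGeometry(), compositeLayer(), and prevDepthTexture are hypothetical placeholders, and the pass count is capped rather than exhaustive. Note that only two depth buffers are live at any moment: the framebuffer's own and the saved copy from the previous pass.

#include <GL/gl.h>

extern GLuint prevDepthTexture;        /* depth texture, created elsewhere */
void drawTransparentGeometry(void);    /* runs the peeling shader above    */
void compositeLayer(void);             /* blends the layer into the result */

enum { NUM_PASSES = 4 };               /* layers to peel; an assumption    */

void renderOrderIndependent(int width, int height)
{
    for (int pass = 0; pass < NUM_PASSES; ++pass) {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        drawTransparentGeometry();

        /* Save this layer's depths; they become the "previous layer"
           for the shader's second depth test on the next pass. */
        glBindTexture(GL_TEXTURE_2D, prevDepthTexture);
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, width, height);

        compositeLayer();
    }
}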

The storage requirements for “brute force” alpha blending strike me as being pretty insane. For a 32-bit 640x480 “alpha buffer” the upper limit works out to somewhere around 5 billion megabytes (see the calculation below). That’s why we need an efficient way of approximating it.
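To spell out where that number comes from, assuming one 4-byte RGBA fragment is stored for every depth value a 32-bit depth buffer can distinguish, at every pixel:

#include <stdio.h>

int main(void)
{
    /* Worst case for a brute-force "alpha buffer": one 4-byte RGBA
       fragment per pixel for each of the 2^32 distinguishable depths. */
    double pixels = 640.0 * 480.0;  /* 307,200 pixels        */
    double layers = 4294967296.0;   /* 2^32 possible depths  */
    double bytes  = pixels * layers * 4.0;
    printf("%.3g bytes (about %.3g megabytes)\n",
           bytes, bytes / (1024.0 * 1024.0));
    return 0;  /* ~5.28e+15 bytes, i.e. ~5e+09 megabytes */
}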

There are reasons why one might want to switch to a tile-based rendering architecture: order-independent transparency comes with the hardware.