(Sorry, I know it’s a bit OT)
I have to deal with the big problem that none of the ATI/NVIDIA cards I know of support a 32-bit depth buffer (excluding the stencil buffer).
Are there any tricks to get higher precision at long view ranges out of the available 24-bit depth buffer, other than moving the clipping planes?
There is no general W-buffering implementation AFAIK, right?
And sorting the geometry is out of the question, too.
Is it still true that you can run into depth buffer problems if you calculate your own projection matrix instead of using glFrustum?
Anyway, I really don’t understand why we get things like full-precision fragment programs but are still stuck with the same 24-bit maximum depth buffer precision we had ten years ago!