Of course this question is not directly related to OpenGL itself, but: why does my NVIDIA Quadro FX 3500 card support only a 24-bit depth buffer? With hundreds of MB of onboard memory, is there really no room for 8 more bits? Would performance suffer if 32 bits were used? Or does everyone simply consider 24 bits sufficient?
(It's not that I'm unhappy with 24 bits; I'm just curious: everything else gets bigger, so why not the depth buffer?)
As you say, adding 8 bits to the depth buffer wouldn't change much. In fact, I assume the depth buffer already occupies 32 bits per pixel so that pixels stay properly aligned. I've also heard it's not uncommon to pack the 8-bit stencil buffer in alongside the depth bits, so what you really have is an interleaved depth/stencil buffer.
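If you want to see what the driver actually gave you, you can query the bit counts from a live context. Here's a minimal sketch using legacy GLUT (the window/display-mode setup is just one way to get a context; the reported values are driver-dependent, but 24/8 is the typical packed D24S8 layout):

```c
#include <stdio.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    /* Ask for a depth and a stencil buffer; the driver picks the layout. */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA | GLUT_DEPTH | GLUT_STENCIL);
    glutCreateWindow("depth bits");

    GLint depth = 0, stencil = 0;
    glGetIntegerv(GL_DEPTH_BITS, &depth);     /* typically 24 */
    glGetIntegerv(GL_STENCIL_BITS, &stencil); /* typically 8  */
    printf("depth: %d bits, stencil: %d bits\n", depth, stencil);
    return 0;
}
```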
But to your question: why not make the depth buffer 32 bits (in the absence of a stencil buffer), then? I believe it's about precision. If the internal calculations are performed with standard 32-bit IEEE floating-point values, you have a 23-bit mantissa plus an implicit leading 1, for a total of 24 significant bits. In that case, making the depth buffer 32 bits wouldn't change anything, because the calculations are only carried out with 24 bits of precision.
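You can convince yourself of that 24-bit limit with a tiny standalone C program: integers up to 2^24 are exactly representable in a float, and precision runs out exactly one step beyond.

```c
#include <stdio.h>
#include <float.h>

int main(void)
{
    /* FLT_MANT_DIG counts the implicit leading 1, so it reports 24. */
    printf("float significand bits: %d\n", FLT_MANT_DIG);

    float a = 8388608.0f;   /* 2^23 */
    float b = 16777216.0f;  /* 2^24 */

    /* 2^23 + 1 is still exact in a float; 2^24 + 1 rounds back to 2^24. */
    printf("2^23 + 1 == 2^23 ? %s\n", (a + 1.0f == a) ? "yes" : "no"); /* no  */
    printf("2^24 + 1 == 2^24 ? %s\n", (b + 1.0f == b) ? "yes" : "no"); /* yes */
    return 0;
}
```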
I don't know anything about the WGL extension, but the NV one is supported on GeForce 8, AFAIK.
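If the NV extension in question is GL_NV_depth_buffer_float (my assumption; that's the one that adds floating-point depth formats on GeForce 8 class hardware), you can check for it at runtime by scanning the extension string:

```c
#include <stdio.h>
#include <string.h>
#include <GL/glut.h>

int main(int argc, char **argv)
{
    /* A context is needed before glGetString(GL_EXTENSIONS) is valid. */
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGBA);
    glutCreateWindow("ext check");

    /* The extension string is a space-separated list of names.
       (A robust check should match whole tokens, not substrings.) */
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    const char *name = "GL_NV_depth_buffer_float"; /* assumed extension */
    printf("%s: %s\n", name, strstr(exts, name) ? "supported" : "not supported");
    return 0;
}
```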