I’ve run across a problem I can’t figure out. When the display is in a 32-bit mode, using the stencil buffer causes no appreciable slowdown. Yet if the display is in a 16-bit mode, using the stencil buffer causes my app to suddenly creep along at a whopping 2 FPS. The instant the app stops using the stencil buffer, the speed picks back up. Why does using the stencil buffer in 16-bit mode cause this slowdown?
Had the same problem. My theory is that in 32-bit mode you’re using the software driver, which does the stencil buffer properly, but in 16-bit mode you’re using the hardware driver, which has to do the stencilling in software because the hardware doesn’t support it, and doesn’t do it very well; hence the slowness.
But I don’t really know what I’m talking about.
With the TNT you have got the following:
16-bit mode: 16-bit Z-buffer, no hardware stencil buffer
32-bit mode: 24-bit Z-buffer, 8-bit stencil buffer
I don’t know why this is the case, but the slowdowns in 16-bit mode are caused by software emulation of the stencil buffer, because no hardware-accelerated stencil buffer is available!
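Rather than hard-coding per-card behaviour, you can detect this at runtime. A minimal sketch (assuming you already have a current OpenGL context; the function name is my own) that asks how many stencil bits the chosen pixel format actually got:

```c
#include <stdio.h>
#include <GL/gl.h>

/* Call after the rendering context is created and made current.
   Zero stencil bits usually means any stencil use will be
   emulated in software, and will be slow. */
void report_stencil_bits(void)
{
    GLint bits = 0;
    glGetIntegerv(GL_STENCIL_BITS, &bits);
    if (bits == 0)
        printf("No hardware stencil bits; consider disabling stencil shadows.\n");
    else
        printf("Stencil bits: %d\n", bits);
}
```

That way the app can fall back to non-stencil shadows automatically instead of special-casing bit depths.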
Ah, thanks, I was under the impression the stencil buffer was hardware accelerated in 16-bit mode. I’m gonna have to disable volume shadows in 16-bit.
Do you happen to know if this is also the case with the TNT2, GeForce, or GeForce 2?
The way it generally works is that the depth buffer and stencil buffer share a 4-byte word. Since 24 bits is usually enough depth precision, the other 8 bits are used for stencil.
In 16bpp mode, there is no easy place to stuff the stencil bits so that they line up nicely on word boundaries.
If you want hardware accelerated stencil, it’s probably best to use 32bpp on NVIDIA products.