Stenciling yields slow framerate

I’m testing stenciling using a program from the red book (slightly modified to constantly update the display and show a framerate). On a GeForce 256, I’m getting about 6 fps when stenciling is enabled, but 400-500 fps when it’s disabled. Supposedly, stenciling is ‘considered free’ on this card when depth buffering is used. Anyone know what’s up here?
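Roughly, the setup looks like this (a trimmed sketch of what I’m doing, not the exact red book listing; drawMask and drawScene are stand-ins for my own drawing code):

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);

    /* write 1s into the stencil buffer where the mask is drawn */
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 1);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    drawMask();    /* stand-in for the masking geometry */

    /* then draw the scene only where the stencil value is 1 */
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    drawScene();   /* stand-in for the rest of the scene */

    glFlush();     /* single-buffered (GLUT_SINGLE) window */
}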

Did you set up the stencil buffer?

As per the red book example, I did this:

glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH | GLUT_STENCIL);

In another program (the one I’m actually testing for), I explicitly create an OpenGL window using ChoosePixelFormat(). The format returned by DescribePixelFormat() shows a 16-bit depth buffer and an 8-bit stencil buffer in use. This program behaves the same: super slow whenever anything uses the stencil buffer.
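For reference, the format selection looks roughly like this (a sketch; hDC is the window’s device context, and error handling is stripped out):

PIXELFORMATDESCRIPTOR pfd;
ZeroMemory(&pfd, sizeof(pfd));
pfd.nSize        = sizeof(pfd);
pfd.nVersion     = 1;
pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
pfd.iPixelType   = PFD_TYPE_RGBA;
pfd.cColorBits   = 16;   /* what I’m requesting */
pfd.cDepthBits   = 16;
pfd.cStencilBits = 8;

int format = ChoosePixelFormat(hDC, &pfd);
DescribePixelFormat(hDC, format, sizeof(pfd), &pfd);   /* pfd now holds what’s actually in use */
SetPixelFormat(hDC, format, &pfd);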

I haven’t been able to find which depth/stencil formats the GeForce supports in hardware. It seems like the stencil reads/writes must be going through software.

What is your color buffer depth? If it is 16-bit, then the card is resorting to a software stencil buffer. In 32-bit color mode, it will use the hardware stencil buffer.

The color buffer depth thing sounds like a good tip.

There’s no way I know of to specify or verify any of these parameters (color/depth/stencil buffer size) through glut.

In my other program, though, PIXELFORMATDESCRIPTOR.cColorBits was set to 16 when calling ChoosePixelFormat(). I tried changing it to 32 (and 24), but the format that’s returned is always 16 (565). Again, this is on a GeForce 256. Am I doing something wrong?

ChoosePixelFormat only chooses formats that are compatible with the current display mode. Assuming you are using a fullscreen mode, you must first see what color mode is being used and then switch to the mode you desire. Then enumerate the possible pixel formats and choose and set the one that best suits your needs. See the Windows API functions EnumDisplaySettings and ChangeDisplaySettings for setting the display mode. Instead of using the Windows API you could use DirectDraw to set the video mode.
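Something along these lines should do it (a rough sketch; the resolution values are just examples, and error handling is omitted):

DEVMODE dm;
ZeroMemory(&dm, sizeof(dm));
dm.dmSize       = sizeof(dm);
dm.dmPelsWidth  = 640;
dm.dmPelsHeight = 480;
dm.dmBitsPerPel = 32;   /* the part that matters for the stencil buffer */
dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;

if (ChangeDisplaySettings(&dm, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
{
    /* mode switch failed: enumerate modes with EnumDisplaySettings and pick another */
}
/* ...then create the window and call ChoosePixelFormat with cColorBits = 32 */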

I believe DFrey got it wrong. The stencil buffer comes with the depth buffer. If your card supports a 32-bit depth buffer, then the stencil buffer is free, because it uses 24 bits for the z-buffer and 8 for the stencil.

The last place you want to lose precision is the color buffer, so a 32-bit color buffer is a 32-bit color buffer, period.

No Gorg, I’m pretty sure I have it right concerning NVIDIA cards. Whenever the color buffer is 32 bits, the stencil buffer is accelerated because the depth buffer goes up to 24 bits and the remaining 8 bits are used for the stencil buffer. But when using a 16-bit color mode, the depth buffer also falls back to 16 bits, and there are no bits left over for the stencil buffer. So in 16-bit color mode, the OpenGL driver resorts to using a software stencil buffer. In every program I’ve made and used, this has been the case. Try turning on stencil buffer shadows in Quake 3 on a GeForce in 16-bit color mode, then turn them on in 32-bit color mode. You will see this quirk of the NVIDIA cards is all too real.
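You can check which layout you actually got at run time. Once the context is current, something like this (just a sketch; assumes <stdio.h> is included) will tell you:

GLint redBits, depthBits, stencilBits;
glGetIntegerv(GL_RED_BITS,     &redBits);
glGetIntegerv(GL_DEPTH_BITS,   &depthBits);
glGetIntegerv(GL_STENCIL_BITS, &stencilBits);
printf("red=%d depth=%d stencil=%d\n", redBits, depthBits, stencilBits);

/* In 32-bit color you should see a 24-bit depth buffer with 8 stencil bits, and the
   stencil is in hardware. In 16-bit color you get a 16-bit depth buffer, and any
   stencil buffer you are given is emulated in software. */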


oh ok. I get it. I just misunderstood what you said. Sorry.

My desktop res was set to 16 bits (duh!). When I set it to 32, the color depth was correctly set to my requested 32 bits, and I got a 24-bit depth buffer and an 8-bit stencil buffer, as expected. Stenciling is hardware accelerated, and everything is cool. Thanks for the help!