Trouble with color depth!!!

Hi All!
I'm having some trouble with my app.
When I run it in 16-bit color (I simply change it in Display Properties -> Settings…)
it runs very slowly (about 1.7 fps), but if I run it in 32-bit color it's fine (~33 fps).
My program uses MFC.
Maybe I'm doing something wrong with the PIXELFORMATDESCRIPTOR?
I use this:

static PIXELFORMATDESCRIPTOR pfd =
{
    sizeof(PIXELFORMATDESCRIPTOR),   // size of this structure
    1,                               // version number
    PFD_DRAW_TO_WINDOW |             // format must support a window
    PFD_SUPPORT_OPENGL |             // format must support OpenGL
    PFD_DOUBLEBUFFER |               // double buffering
    PFD_STEREO_DONTCARE,             // stereo doesn't matter
    PFD_TYPE_RGBA,                   // RGBA pixel type
    16,                              // 16-bit color buffer
    0, 0, 0, 0, 0, 0,                // color bits ignored
    0,                               // no alpha buffer
    0,                               // shift bit ignored
    0,                               // no accumulation buffer
    0, 0, 0, 0,                      // accumulation bits ignored
    16,                              // 16-bit z-buffer
    16,                              // 16-bit stencil buffer
    0,                               // no auxiliary buffers
    PFD_MAIN_PLANE,                  // main drawing layer
    0,                               // reserved
    0, 0, 0                          // layer masks ignored
};

Is it possible at all to use a 16-bit z-buffer and a 16-bit stencil buffer at 16-bit color depth? My TNT wasn't able to do this. Quake 3 also always needed the color depth set to 32-bit in order to use stencil buffering.

I used to have that problem too.
I found out that if I requested a 24- or 32-bit depth buffer in 16-bit color mode, it would fall back to the Microsoft software emulation.
So now I just pass 0 in the depth-buffer field, and then I get a 16-bit depth buffer in all modes… at least on my NVIDIA hardware.
One day I'll write some code to check the color depth before I set the depth-buffer bits.
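Roughly something like this minimal sketch (untested; the function name, SetupPixelFormat, and the hdc parameter are just illustrative): query the desktop color depth with GetDeviceCaps and match the depth/stencil request to it.

#include <windows.h>

// Sketch only: pick depth/stencil bits based on the current color depth so
// the driver doesn't fall back to the software implementation.
void SetupPixelFormat(HDC hdc)
{
    int colorBits = GetDeviceCaps(hdc, BITSPIXEL);      // desktop color depth

    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize        = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion     = 1;
    pfd.dwFlags      = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType   = PFD_TYPE_RGBA;
    pfd.cColorBits   = (BYTE)colorBits;
    pfd.cDepthBits   = (colorBits <= 16) ? 16 : 24;     // 16-bit Z in 16-bit color
    pfd.cStencilBits = (colorBits <= 16) ? 0  : 8;      // stencil only in 32-bit color
    pfd.iLayerType   = PFD_MAIN_PLANE;

    int format = ChoosePixelFormat(hdc, &pfd);
    if (format != 0)
        SetPixelFormat(hdc, format, &pfd);
}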

I can also request an 8-bit stencil buffer in 16-bit color mode and still get HW acceleration.
I remember seeing some NVIDIA guy describing which bit combinations to use:
16-bit color, 16-bit depth, 0-bit stencil
32-bit color, 24-bit depth, 8-bit stencil
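
To see which combination the driver actually gave you (and whether you ended up on the Microsoft software renderer), an untested sketch like the one below should do it: DescribePixelFormat reports the real bit counts, and PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED means the format is unaccelerated. The hdc and the format index are assumed to come from your normal setup code; the function name is illustrative.

#include <windows.h>

// Sketch only: check what the chosen pixel format really provides.
BOOL IsFormatAccelerated(HDC hdc, int format)
{
    PIXELFORMATDESCRIPTOR actual;
    DescribePixelFormat(hdc, format, sizeof(actual), &actual);

    // actual.cColorBits / cDepthBits / cStencilBits hold the combo the driver
    // picked; 16/16/0 and 32/24/8 are the combinations mentioned above.
    BOOL genericOnly = (actual.dwFlags & PFD_GENERIC_FORMAT) &&
                      !(actual.dwFlags & PFD_GENERIC_ACCELERATED);
    return !genericOnly;
}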

Isaack

Prior to the GeForce2 MX, 32-bit color could only be used with 24-bit Z and either 0 or 8 bits of stencil, while 16-bit color required 16-bit Z and 0 bits of stencil (we also support 8 bits of stencil with 16-bit color, but if you actually use those 8 bits it's unaccelerated).

GeForce2 MX adds support for 32-bit color with 16-bit depth. It does not support 16-bit color with 24-bit depth or accelerated stencil.

  • Matt