I’ve got a number of calculations running off-screen; the results are subsequently read back from the pbuffer using glReadPixels. I set up the pixel format like this:
int pf_attr[] =
{
    WGL_SUPPORT_OPENGL_ARB,       TRUE,
    WGL_DRAW_TO_PBUFFER_ARB,      TRUE,
    WGL_BIND_TO_TEXTURE_RGBA_ARB, TRUE,
    WGL_RED_BITS_ARB,             32,
    WGL_GREEN_BITS_ARB,           32,
    WGL_BLUE_BITS_ARB,            32,
    WGL_ALPHA_BITS_ARB,           32,
    WGL_DEPTH_BITS_ARB,           16,
    WGL_DOUBLE_BUFFER_ARB,        FALSE,
    0
};
wglChoosePixelFormatARB(g_hDC, (const int*)pf_attr, NULL, 1, &pixelFormat, &count);
This works fine on most recent ATI cards, but no Nvidia card (GeForce 5950, GeForce 6800, Quadro, all with the most recent drivers) will give me a 128bpp pixel format! I’ve also tried:
… WGL_COLOR_BITS_EXT, 128, …
Is it even possible to get high-precision pbuffers on an Nvidia card? (Realtech’s lovely OpenGL Extensions Viewer says that the Nvidia cards I’ve tried don’t have any 128bpp pixel formats, but that can’t be true!?)