I was converting my app from GLUT to Windows code and noticed that the z-buffer precision was a lot worse full screen (using WS_POPUP) than windowed or GLUT full screen. I just wondered whether anyone else had noticed this or knew why it happens. If I use WS_POPUPWINDOW the z-buffer works fine again. I don't think it's something people would normally notice, but my app is very z-buffer sensitive, as I have the near and far planes set a long way apart for good reason.
If you used glutFullScreen(), I can think of one reason why you lose precision when doing pure Win32 calls. glutFullScreen() just creates a large window covering the entire screen. It does not alter the display settings, so the window has the same color depth as the desktop. If your desktop is running 24- or 32-bit color, you might get a 24- or 32-bit depth buffer. When you use WS_POPUP, I assume you also change the display settings to make it fullscreen; if you switch the color depth to 16 bit there, you might get a 16-bit depth buffer. Try forcing the new display settings to 24- or 32-bit color depth and see what happens.
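A sketch of what that might look like in Win32 (a fragment, not a complete program; `hdc` is assumed to be your window's device context, and the mode values are examples):

```c
/* Force the fullscreen mode to 32-bit color before creating the context. */
DEVMODE dm = {0};
dm.dmSize       = sizeof(dm);
dm.dmPelsWidth  = 1024;
dm.dmPelsHeight = 768;
dm.dmBitsPerPel = 32;                 /* 32-bit color, not 16 */
dm.dmFields     = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL;
ChangeDisplaySettings(&dm, CDS_FULLSCREEN);

/* Ask for a 24-bit depth buffer explicitly... */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);

/* ...then check what the driver actually picked: cDepthBits may
   come back as 16 even though 24 was requested. */
DescribePixelFormat(hdc, format, sizeof(pfd), &pfd);
```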
On NVIDIA hardware, you have to use 16 bit depthbuffer with 16 bit colordepth, and 24 bit depthbuffer with 32 bit colordepth. Otherwise you won’t get much out of your card.
Thanks for the reply. Forcing my app to 24- and 32-bit colour depth doesn't make any difference. It's weird the way WS_POPUP behaves differently from WS_POPUPWINDOW: it looks like WS_POPUP gives me a 16-bit z-buffer and WS_POPUPWINDOW gives me 24. I'm using a GeForce2 with Detonator 12s. I'll test it on my work PC; that should give me a clue where the problem lies.
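One way to confirm what each window style is actually giving you, rather than inferring it from artifacts, is to query the depth buffer size at runtime (fragment; needs a current GL context):

```c
GLint depthBits = 0;
glGetIntegerv(GL_DEPTH_BITS, &depthBits);   /* 16 vs 24 tells the story */
```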
On older NVIDIA hardware (GF2 GTS and earlier), you need to match 16-bit color with 16-bit Z and 32-bit color with 24-bit Z/8-bit stencil.
On GF2 MX and newer, you can use 32-bit color with 16-bit Z.
On GF3, you can use all four combinations.