My program always runs at the current desktop resolution by default. My desktop is 1600x1200x32. If I just run the program, it gives me 67fps. Now, if I switch to 800x600 and switch back (using ::ChangeDisplaySettingsEx(0, 0, 0, 0, 0); ), the framerate jumps up to 82fps!! That's more than a 20% increase in framerate!!! Note that I run at the same 1600x1200x32 resolution in both cases!
Maybe a different monitor refresh rate is used after ChangeDisplaySettingsEx(), or maybe the switch temporarily freed up some video memory, so the textures or VBOs your program uses can all be stored in video memory.
What do you mean by “I run at the same 1600x1200x32 resolution in both cases!”? That your window is that size even when the screen is at 800*600? In that case you should be aware that any pixel that isn't showing isn't rendered; those pixels are discarded directly after the rasterizer, and you don't have to fetch textures or run shaders for them.
If all this is in a PBuffer or an FBO then the former explanation seems ok.
I don’t sync to the refresh rate, so this shouldn’t make any difference…
but it does.
e.g. run doom3 (or another app) with your monitor at 120hz, then run it at 60hz.
the 120hz run should be slower than the 60hz one.
(im sure a few benchmarking sites employ this tweak for a few dollars)
In your app, retrieve the refresh rate before and after the switches in res. I’m betting the refresh rate when you switch back to high res will be lower (prolly 60Hz), which would go along with what zed said.
Why should/could the 120 Hz run be slower than the 60 Hz run?
Because the memory on your graphics card has to be read more frequently to show a picture on your monitor:
1600x1200x32@120Hz => 879MByte/s
1600x1200x32@60Hz => 439MByte/s
The memory bandwidth is limited and can only be used once: whatever scanout consumes is no longer available to rendering.
This explanation is very interesting, but it must be something else in this case:
switch back: 1600x1200x32@60
BTW: I remember in the “old days” there was a difference between a fullscreen app and a windowed app. I never understood this, as a fullscreen window is just a popup window without a frame that occupies the whole screen. Is it possible that the driver still detects this case (but only when switching resolution) and is able to perform better??
I forgot to mention that I’m using two monitors, both at 1600x1200, but I only use single-screen acceleration, and always run my app on the primary display.
Now I’ve tried disabling the second monitor altogether, and guess what! The difference in fps did not disappear; instead it increased even more!!
Hmm…I’m declaring this as an X-File.
So I’m wondering, what is your program doing BTW?
For unexplainable differences I often add high-resolution timers (RDTSC on IA32) to my code around suspect areas. Candidates in this case could be (an added) glFinish just before swapping, and then the actual SwapBuffers call.
(Windows-specific stuff follows)
I just the other day found a problem where I intermittently would get roughly 3 seconds (!) for the Finish, and then another 3 seconds for the Swap. 6.7 seconds in total was the worst I experienced. Turned out the ATI driver was effectively busy-waiting for… something (seemingly repeatedly calling Sleep(0), waiting for I don’t know what).
My >100fps to 0.15fps beats your 67 to 82, and I didn’t even switch screen resolution.
Btw, if really hunting for speed, profiling has shown that the (likely completely redundant) LoadLibrary+GetProcAddress in glSwapBuffers in opengl32.dll eats quite a lot of the call’s time. If on Windows and no GDI is involved, wglSwapBuffers can save hundreds of thousands of CPU cycles - for each call.