I’m developing on Windows 7 x64 with an ATI 4670 GPU and Catalyst 10.5 drivers.
I have a 2D graphics program that is using OpenGL to display a background image with effects rendered over it in a window. I’m using double buffering.
I call glViewport and glOrtho so that one pixel of a texture displayed on a 2D rectangle maps to exactly one pixel on the screen. That part seems to be working.
My rendering algorithm accumulates a fair number of operations in a display list, calls the list all at once, then does a SwapBuffers to get the data on the screen. All good so far…
My problem is this: on entry into the rendering loop I detect whether the window size has changed, and if it has, I reissue the glViewport() and glOrtho() calls immediately (not inside the display list). I thought, possibly mistakenly, that this would not affect the front buffer but would simply prepare a new back buffer sized to fit the resized window.
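To make the ordering concrete, here is roughly what one pass of my loop does (sketched with details elided; `list` and `hdc` stand in for my actual handles, and the glOrtho arguments assume the top-left-origin setup described above):

```
on entry to the render loop:
    if window size changed:
        glViewport(0, 0, new_w, new_h)        // issued immediately
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        glOrtho(0, new_w, new_h, 0, -1, 1)    // also immediate, not in the list
    glNewList(list, GL_COMPILE)               // accumulate this frame's ops
    ...draw background texture and effects...
    glEndList()
    glCallList(list)                          // render everything at once
    SwapBuffers(hdc)                          // present the finished frame
```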
Unfortunately, the glViewport command seems to rescale the data already in the front buffer to the new window size, so I briefly see a stretched image, which is then overlaid (once the rendering cycle completes) with a properly sized one.
My interpretation is that glViewport takes effect immediately, causing the data already displayed to be resized to fit the window (the GPU is apparently very fast at this), and only THEN does the re-rendered data replace it. Visually this looks very nasty; the image jumps around.
What I am trying to accomplish is to have the previously rendered data remain the same size and in the same place until newly rendered data is available to replace it.
How do I stop the stretching?
Have I made an invalid assumption in wanting glViewport to always match the window size?
Is there a way to have the glViewport command not resize the existing front buffer data?
I’ve not been able to find much info on using OpenGL this way as most applications seem to use a fixed window size (e.g., full screen).
Thanks for any guidance you can provide.