This is kind of an advanced newbie issue…
I have been having a heck of a time with a threaded implementation in which a rendering thread is given a parameter block and set off to render into the back buffer of a double-buffered window.
Everything works great except that, very occasionally during window resizing, the GDI (main application) thread somehow caused the back buffer to be cleared right in the middle of rendering: I was partway through a glCallList call in the rendering thread, and only the second half of the rendering would show up.
This was happening with multiple (and very different) NVIDIA drivers on accelerated systems, and with the ancient GDI Generic OpenGL 1.1 implementation in Windows - but NOT on ATI accelerated systems.
I worked around the problem by protecting the sequence from glCallList through the SwapBuffers call with a mutex, making sure the GDI thread can't issue its DeferWindowPos / EndDeferWindowPos calls until the rendering thread is clear of the rendering and buffer operations. This is actually reasonably acceptable because resizing is the exception rather than normal operation in my application: users will typically resize the window, then interact with the display.
My theory: changing the window size instantly resizes and clears the buffers associated with the window, including the back buffer, on some systems. This coupling was unexpected.
My question is this:
Is this GDI/buffer coupling a well-known thing?
Documentation seems virtually nonexistent regarding this part of the OpenGL implementation. I guess it makes sense in a way, because the buffers have to become the size of the Window at some point (I had naively thought it would be during glViewport), and it’s interesting that ATI doesn’t seem to be burdened by it.
I’d be grateful for any insight, and especially a pointer to documentation I haven’t been able to find that describes this part of the implementation.