Another ReadPixels Topic (not Optimization)

In my app, I have a mode where I run through my render loop and call glReadPixels on the back buffer after each render (after the glFlush and glFinish calls). I am having an issue with certain NVIDIA drivers (after 29.42) where the buffer does not seem to be updated if there is a dialog window over my context. I assume this is some form of NVIDIA optimization.
Does anyone know of a way of ensuring that the buffer gets updated (beyond ensuring no window is above it)?
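For reference, the readback pattern being described is roughly the following (a sketch, assuming a current GL context of size `width` x `height`; the function name is made up and error checking is omitted):

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Read the back buffer into a newly allocated RGBA buffer.
 * Caller frees the result; rows come back bottom-up. */
unsigned char *read_back_buffer(int width, int height)
{
    unsigned char *pixels = malloc((size_t)width * (size_t)height * 4);
    if (!pixels)
        return NULL;

    glFinish();                           /* ensure rendering has completed */
    glReadBuffer(GL_BACK);                /* read from the back buffer */
    glPixelStorei(GL_PACK_ALIGNMENT, 1);  /* tightly packed rows */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```

Even with the glFinish, the pixels returned for any region failing the pixel ownership test (e.g. covered by another window) are undefined, which is the behavior observed here.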

I don’t think this is an NVIDIA-specific optimization; it is something pretty common. I solved this problem by using a pbuffer instead of the back buffer, and I suspect there are no other good solutions.


I know this issue. It seems to be some kind of pixel optimization: it does not affect the rendering of polygons, but it does affect glClear and other pixel calls.
So if you don’t want to use pbuffers, you must draw everything using textured quads.

[This message has been edited by AdrianD (edited 06-06-2003).]

Originally posted by epajarre:
I solved this problem by using a pbuffer instead of the backbuffer

I had the same idea; however, the app is putting the memory into a buffer which is then transferred out, so it needs to be as close to real time as possible. Using the pbuffer seems to be slower than the main context.
I suspect that you are correct and that it is the best solution.

[This message has been edited by Squad (edited 06-06-2003).]
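For anyone trying the pbuffer route on Windows, the setup looks roughly like this (a sketch assuming the WGL_ARB_pbuffer and WGL_ARB_pixel_format extensions are available; the function pointers must first be fetched with wglGetProcAddress from an existing context, and error handling is minimal):

```c
#include <windows.h>
#include <GL/gl.h>
#include "wglext.h"   /* WGL_* tokens and function-pointer typedefs */

/* Assumed to be loaded via wglGetProcAddress beforehand. */
extern PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB;
extern PFNWGLCREATEPBUFFERARBPROC     wglCreatePbufferARB;
extern PFNWGLGETPBUFFERDCARBPROC      wglGetPbufferDCARB;

/* Create a pbuffer and a GL context for it; the caller makes it
 * current, renders, and then calls glReadPixels as usual. */
HGLRC create_pbuffer_context(HDC hdc, int width, int height, HDC *pbuf_dc)
{
    const int attribs[] = {
        WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
        WGL_DOUBLE_BUFFER_ARB,   GL_FALSE,
        WGL_COLOR_BITS_ARB,      24,
        WGL_DEPTH_BITS_ARB,      24,
        0
    };
    int  format;
    UINT count;

    if (!wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &count) ||
        count == 0)
        return NULL;

    HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, format, width, height, NULL);
    if (!pbuffer)
        return NULL;

    *pbuf_dc = wglGetPbufferDCARB(pbuffer);
    return wglCreateContext(*pbuf_dc);
}
```

Because the pbuffer is an off-screen surface, its pixels always pass the ownership test, so overlapping windows no longer affect the readback.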

Reading anything out of an occluded window isn’t a safe thing to be doing. In fact, the GL spec specifically says that this is implementation dependent. From section 4.1.1, “Pixel Ownership Test”:

“The first test is to determine if the pixel at location (xw, yw) in the framebuffer is currently owned by the GL (more precisely, by this GL context). If it is not, the window system decides the fate of the incoming fragment. Possible results are that the fragment is discarded or that some subset of the subsequent per-fragment operations are applied to the fragment. This test allows the window system to control the GL’s behavior, for instance, when a GL window is obscured.”

Rendering to a pbuffer might (unfortunately) be slower, but it’s much safer, and you aren’t likely to get hit with implementation-specific behavior.

– Ben

edit: formatting

[This message has been edited by bashbaug (edited 06-06-2003).]

I can attest to the “interesting” behavior you get from different drivers when putting GDI windows on top of your viewport and then trying to read back. We’re using pbuffers and don’t see much of a slowdown (although we aren’t extremely heavy users).

Note that the OpenGL/WGL interface seems to define ALL framebuffer pixels as undefined when GDI windows intersect the viewport, so I don’t think there’s much you can do except cope :)