Shaders and ReadPixels

Also:
Again, I'm on a 6800 Ultra.
Driver: see my previous post.

From my understanding, glReadPixels shouldn't care how I generated the pixels in the framebuffer.

However, it seems that after I render a scene using a shader, the glReadPixels call gets significantly slower.

Anyone know what the deal is with that?

Basic GPU functionality.

Just because you submitted a frame's worth of commands doesn't mean the card is finished with it. Most likely much of it is sitting in a command buffer while the GPU works through earlier data; when the GPU frees up, the driver feeds it more. This lets the card run asynchronously with the CPU.

Your call to glReadPixels forces the driver to wait until the scene is fully rendered before reading the pixels; otherwise you would get back a half-rendered scene.

Try waiting and doing something else useful, then call glReadPixels later when the GPU might be finished.
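
Something like this rough sketch, for instance (render_scene and do_other_cpu_work are hypothetical placeholders for your own code):

```c
#include <GL/gl.h>

/* Hypothetical stand-ins for your own code. */
extern void render_scene(void);       /* issues the frame's draw calls */
extern void do_other_cpu_work(void);  /* any useful CPU-side work      */

void deferred_readback(int width, int height, void *pixels)
{
    render_scene();       /* queue the frame's commands                */
    glFlush();            /* kick the GPU without blocking the CPU     */

    do_other_cpu_work();  /* overlap CPU work with the GPU's rendering */

    /* By now the GPU has (hopefully) finished the frame, so the
     * implicit wait inside glReadPixels should be short or zero. */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
```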

Won't a glFinish block until everything has been deposited into the framebuffer, though?

A glReadPixels afterwards should then just DMA the framebuffer contents into system memory.

In both cases (rendering with and without a shader) I issue a glFinish first.
That's how I'm doing my comparison.
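
Roughly like this, in case it helps (timer_seconds is a placeholder for whatever high-resolution timer I'm using):

```c
#include <GL/gl.h>

/* Placeholders: a high-resolution timer and my scene renderer. */
extern double timer_seconds(void);
extern void   render_scene(int use_shader);  /* draw with or without the shader */

/* Time the readback alone, after glFinish has drained the pipeline. */
static double time_readback(int use_shader, int w, int h, void *buf)
{
    render_scene(use_shader);
    glFinish();  /* block until the frame is fully rendered */

    double t0 = timer_seconds();
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, buf);
    double t1 = timer_seconds();

    return t1 - t0;  /* seconds spent in the readback itself */
}
```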

Oh, so the “with a shader” part is apparently what's causing the slowdown? So glReadPixels works (relatively) OK if you're not using a shader?

Hmmm, that's odd. Though I would point out that drivers for the 6800 are probably still betas (or perhaps not even designed for the 6800, if you're using public nVidia drivers). As such, they could have all kinds of odd issues.
