multiple glReadPixels


I am trying to write a program that uses OpenGL to generate a series of images and save them to files. I use ImageMagick for the image manipulation, and GLUT or SDL as the OpenGL windowing layer.

With GLUT, I tried putting all the code in the display function. That is to say:

void displayFunc(void)
  set view and viewport;
  generate image `current` (with previously initialized display lists);
  glReadPixels(...);            (from the back buffer)
  ImageMagick code to save the file;
  current++;

But I experience instability. On the first pass, glReadPixels always gives a black image, as if the buffer had not been initialized. Sometimes the second pass does too.

The other passes sometimes give corrupt images.

The ImageMagick code sits between the image generation and `current++`, but sometimes it seems not to be executed (or to return prematurely).

I think the problem comes from GLUT using a multithreaded environment, so I can't call MagickWand functions without mutex protection. Or maybe glReadPixels reads pixels before the frame is finished?

So can someone explain how and where to call glReadPixels? Do I need mutexes for my variable `current`, for access to the buffer, and for saving to file? How can I be sure the frame is finished before reading? Where should I put my ImageMagick code?

(Excuse me for my English)

  1. GLUT is not multithreaded.

  2. glReadPixels reads the finished frame: it blocks until all previous GL commands have been processed. There is still the possibility of a bug in the driver; you can add glFinish() right before glReadPixels to see if it makes a difference, but it is probably something else.

  3. the man page for glReadPixels says that “Values for pixels that lie outside the window connected to the current GL context are undefined.”
    Maybe you have a problem with this, especially on the first frame?
    And are you sure you do your double buffering correctly?
    Something like:

init double buffered window
LOOP:
draw (by default on the back buffer)
read (by default from the back buffer)
goto LOOP

  4. what are your card/OS/driver versions?

Thanks for your answer!

I am on Gentoo Linux with an ATI Radeon graphics card, using the kernel driver (radeon).

I have already tried adding glFinish() and glFlush().
My viewport is inside the window, and I read only the viewport's pixels.
Yes, my window is double buffered.

I have put my code in my glutDisplayFunc function, so I don't loop as you propose, because GLUT does it for me.

I have tried reading the pixels on user input (when the z key is pressed) and it works!

Very strange…

Yeah, sorry, forget about the loop; I am more used to loop-style rendering than to event-driven rendering like GLUT's. In that case, the loop body is simply the display function.
Make sure you do the swapbuffers after the glReadPixels and before the end of the display func.
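In the event-driven case, the corrected ordering inside the display function would look like this (a sketch in the same style as the pseudocode above, not your actual code):

void displayFunc(void)
  set view and viewport;
  generate image `current`;         (into the back buffer)
  glReadPixels(...);                (still reading the back buffer)
  ImageMagick code to save the file;
  glutSwapBuffers();                (only now present the frame)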

EDIT: what are your kernel, OS, and Xorg versions?


No problem with the swapBuffers placement.

My kernel is gentoo-sources-2.6.28-r5.
My xorg version is 7.2.
My OS? It's Gentoo Linux with the GNOME display manager.


Yes, you're right!! It was a swap-buffer placement problem!
1) I swapped my buffers twice per display.
2) One swap was before glReadPixels.

The second problem came from GLUT processing all pending keyboard events before returning to the display func. By pressing keys quickly, I incremented `current` multiple times without ever going through the display function.
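One way to avoid the multiple increments is to let the keyboard callback only request a redraw, and advance `current` inside the display function; glutPostRedisplay() coalesces multiple requests into a single redisplay. A sketch (same style as the pseudocode above, not necessarily my exact code):

void keyboardFunc(unsigned char key, int x, int y)
  if (key == 'z')
    glutPostRedisplay();            (just request a redraw)

void displayFunc(void)
  generate, read and save image `current`;
  glutSwapBuffers();
  current++;                        (advance once per rendered frame)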

Many thanks!!