Using glReadPixels() for bitmap screenshot

Ahoy everyone!

For a week now I’ve been pounding my head against my keyboard because of my lack of knowledge of OpenGL.

My main issue is writing an easy-to-use, library-free function that exports a rendered OpenGL scene to a bitmap.

So far I’ve made a toy model that creates a quick rendering of a scene and tries to export it to a BMP file. I think I’ve managed to understand how bitmap headers work, and my image software (ImageMagick) seems happy with the result.
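For reference, here is a minimal sketch of the kind of library-free export I have in mind (the save_bmp helper is hypothetical; it assumes a little-endian machine and writes an uncompressed 24-bit BMP):

#include <cstdio>
#include <cstring>
#include <stdint.h>
#include <vector>
#include <GL/gl.h>

// Hypothetical helper: dump the current GL read buffer to an uncompressed 24-bit BMP.
// Assumes a little-endian machine, since BMP headers are little-endian.
void save_bmp(const char *path, int width, int height)
{
    // BMP rows are padded to a multiple of 4 bytes; GL_PACK_ALIGNMENT = 4 produces
    // exactly that layout, and GL_BGR matches BMP's channel order.
    // (GL_BGR needs OpenGL 1.2; older headers call it GL_BGR_EXT.)
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    const int rowSize = (width * 3 + 3) & ~3;
    std::vector<unsigned char> pixels(rowSize * height);
    glReadPixels(0, 0, width, height, GL_BGR, GL_UNSIGNED_BYTE, &pixels[0]);

    // 14-byte file header followed by a 40-byte BITMAPINFOHEADER.
    unsigned char hdr[54] = {0};
    uint32_t dataSize = rowSize * height;
    uint32_t fileSize = 54 + dataSize;
    uint32_t offset = 54, infoSize = 40;
    int32_t w = width, h = height;  // positive height = bottom-up rows, which is
                                    // exactly the order glReadPixels returns
    uint16_t planes = 1, bpp = 24;
    hdr[0] = 'B'; hdr[1] = 'M';
    std::memcpy(hdr + 2,  &fileSize, 4);
    std::memcpy(hdr + 10, &offset,   4);
    std::memcpy(hdr + 14, &infoSize, 4);
    std::memcpy(hdr + 18, &w,        4);
    std::memcpy(hdr + 22, &h,        4);
    std::memcpy(hdr + 26, &planes,   2);
    std::memcpy(hdr + 28, &bpp,      2);
    std::memcpy(hdr + 34, &dataSize, 4);

    FILE *f = std::fopen(path, "wb");
    if (!f) return;
    std::fwrite(hdr, 1, 54, f);
    std::fwrite(&pixels[0], 1, dataSize, f);
    std::fclose(f);
}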

My real problem is the content of the generated image: it’s completely empty, no matter what parameters I give to glReadPixels()!

Could you tell me where to find a canonical way to proceed?
Even the Red Book gave me nothing clear :frowning:

It’s sad to see how hard it can be to take a screenshot…

Thanks, Z.

Hi!
I’ve managed to do it in SDL.
Here’s the basic code I’m using.
Hope it helps :slight_smile:

#include <cstring>   // for std::memcpy

// A 24-bit RGB surface; these masks give R,G,B byte order on little-endian machines,
// matching the GL_RGB data below.
SDL_Surface * temp = SDL_CreateRGBSurface(SDL_SWSURFACE, screen_width, screen_height, 24, 0x000000FF, 0x0000FF00, 0x00FF0000, 0);

char * pixels = new char [3 * screen_width * screen_height];

// Request tightly packed rows: with the default 4-byte alignment the image gets
// skewed whenever screen_width * 3 is not a multiple of 4.
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glReadPixels(0, 0, screen_width, screen_height, GL_RGB, GL_UNSIGNED_BYTE, pixels);

// glReadPixels returns rows bottom-up, SDL stores them top-down, so copy in reverse.
for (int i = 0 ; i < screen_height ; i++)
	std::memcpy( ((char *) temp->pixels) + temp->pitch * i, pixels + 3 * screen_width * (screen_height - i - 1), screen_width * 3 );

delete [] pixels;

SDL_SaveBMP(temp, "pic.bmp");

SDL_FreeSurface(temp);

It’s not the most efficient way, but it does work :slight_smile:

If glReadPixels does not return data, are you sure you are reading from a non-obscured window?
http://www.opengl.org/resources/faq/technical/rasterization.htm#rast0070

Thank you for your answers.

@yesu666: I’m trying to manage without using an external library.

@zbuffer: This function is going to be used in a batch environment, i.e. without X.
Do you think that, in that particular case, taking a screenshot of an already rendered but never displayed scene is doable?

No, because by definition nothing gets “already rendered” in an X-less environment.

You need a very special setup for that to work. Can you actually create a GL context on an X-less server? And what do you get as the result of these calls:
glGetString(GL_VENDOR);
glGetString(GL_RENDERER);
glGetString(GL_VERSION);

Thank you again for your answer, I really do appreciate it.
The creation of a GL context failed (“Failed to open display”).
Those three calls returned empty strings…

Could you please tell me where to find that “very special setup”, or clues about it?

Off-screen Mesa is the only way I know of to render without X:
http://www.mesa3d.org/osmesa.html
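
A minimal sketch of what that looks like (the size and RGBA format here are just example values; build with -lOSMesa):

#include <GL/osmesa.h>
#include <GL/gl.h>
#include <cstdio>

int main()
{
    const int width = 640, height = 480;

    // A pure software context: no X server, no window.
    OSMesaContext ctx = OSMesaCreateContext(OSMESA_RGBA, NULL);
    if (!ctx) { std::fprintf(stderr, "OSMesaCreateContext failed\n"); return 1; }

    // Rendering goes straight into this client-side buffer.
    unsigned char *buffer = new unsigned char[width * height * 4];
    if (!OSMesaMakeCurrent(ctx, buffer, GL_UNSIGNED_BYTE, width, height)) {
        std::fprintf(stderr, "OSMesaMakeCurrent failed\n"); return 1;
    }

    // ... normal GL drawing here ...
    glClearColor(0.2f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glFinish();

    // The pixels are already in `buffer` (RGBA, bottom-up rows); no glReadPixels
    // is needed, although glReadPixels works too.

    OSMesaDestroyContext(ctx);
    delete [] buffer;
    return 0;
}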

To my knowledge, only VirtualGL can offer this functionality with hardware acceleration:
http://www.virtualgl.org/About/Introduction

(and using OpenGL for pure software rendering holds little interest)

Can you describe how OpenGL fits into your project?

Thank you both for your interest in my problem.

OpenGL is used to render a complex architectural structure. In interactive mode, on the user’s computer, the main program opens an OpenGL window and waits for the user to close it, thereby validating the design before the computation runs locally. For bigger structures, we launch the same program on clusters, hence in batch mode with MPI, OpenMP and so on. There, when the program reaches the point where it is supposed to open the OpenGL window, it fails. The code currently traps the exception and resumes without any user feedback on the design.

Because of the main program’s architecture, it is very difficult to refactor the way OpenGL is invoked. So I’ve settled on the solution of generating a “screenshot” of said scene to be reviewed by the user. If they consider the design a failure, they simply ask the batch system to kill the running job.

I hope it’s clear…

As I said, I’d rather proceed without an external solution. I’ll look into Mesa’s off-screen rendering interface, as we already use Mesa.

But if you got other ideas, I’ll listen to them :smiley:

If your system supports OpenGL 3.0+, it should be possible to create an OpenGL context without a default framebuffer. You then create your own “behind-the-scenes”, application-created framebuffer with a framebuffer object (“FBO”) and use glReadPixels to read from it.
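
Something like this, as a minimal sketch (assuming a current 3.0+ context; width and height are whatever the scene needs):

GLuint fbo, colorRb, depthRb;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

// Color attachment.
glGenRenderbuffers(1, &colorRb);
glBindRenderbuffer(GL_RENDERBUFFER, colorRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRb);

// Depth attachment.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    ; // handle the error

glViewport(0, 0, width, height);
// ... render the scene as usual ...

// Read back from the FBO's color attachment.
glReadBuffer(GL_COLOR_ATTACHMENT0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
char *out = new char[width * height * 3];
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, out);
// ... save `out` to a file, then delete [] out ...

glBindFramebuffer(GL_FRAMEBUFFER, 0);

A nice side effect: the pixel-ownership problem from the obscured-window FAQ above does not apply to an FBO, so what you read back is always the full image.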

I’ve never used X, but I found the extension spec that added this functionality (there is a similar extension for Windows).
http://www.opengl.org/registry/specs/ARB/glx_create_context.txt
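
For what it’s worth, here is a rough sketch of that path (just a sketch: it still assumes *some* X display is reachable, e.g. a headless Xvfb server, and it omits error handling and cleanup):

#include <X11/Xlib.h>
#include <GL/glx.h>

typedef GLXContext (*glXCreateContextAttribsARBProc)
    (Display *, GLXFBConfig, GLXContext, Bool, const int *);

Display *dpy = XOpenDisplay(NULL);

// Pick any RGBA-capable framebuffer config.
int count = 0;
static const int fbAttribs[] = { GLX_RENDER_TYPE, GLX_RGBA_BIT, None };
GLXFBConfig *configs = glXChooseFBConfig(dpy, DefaultScreen(dpy), fbAttribs, &count);

// The entry point comes from the GLX_ARB_create_context extension.
glXCreateContextAttribsARBProc glXCreateContextAttribsARB =
    (glXCreateContextAttribsARBProc) glXGetProcAddress(
        (const GLubyte *) "glXCreateContextAttribsARB");

static const int ctxAttribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 0,
    None
};
GLXContext ctx = glXCreateContextAttribsARB(dpy, configs[0], NULL, True, ctxAttribs);

// With a 3.0+ context no drawable is required: make it current "windowless"
// and render into an FBO as described above.
glXMakeContextCurrent(dpy, None, None, ctx);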

For bigger structures, we launch the same program on clusters

OK, but is there some 3D GPU hardware on these clusters? And which one?

  • if yes, go with VirtualGL: simply run the GL app on the remote server, while the user sees the result thanks to the VirtualGL connection. It is really the typical use case.

  • if not, you will need maybe 10x the CPU power of a desktop PC with a 3D video card to make up for GPU-less software OpenGL. In this case, go with OSMesa.

Thank you very much for your support.
Because of several drawbacks, we’ve decided to delay the integration of those tools in our system. Nevertheless, we do hope to find a solution within the next six months.

I’ll tell you as soon as we find something relevant.

So long and thanks for all the tips.

Z

Hi all,

I wasn’t sure if I should start a new topic or continue with this one. I have a very similar situation, except that I have figured out a way to generate the bitmap image, and I would like to know if there is a more efficient way of doing so. In my application (Windows Forms with VB.NET), I have a bunch of PictureBoxes, and I would like to update them with the bitmap image each time the OpenGL window is rendered (change in settings, rotation, etc.). Each PictureBox contains different OpenGL content, so I am forced to clear the OpenGL window, draw the data, read the pixels, and then convert them to a bitmap. This process is not too bad with one PictureBox, but with many PictureBoxes it becomes relatively slow. Is there a more efficient way of capturing a screenshot of the OpenGL window?