Weird glReadPixels behavior?

I’m capturing the front-buffer pixels for subsequent use. I find that the image read back is not what I see on the screen but some kind of pre-rendered form of it: the alpha is not opaque, and the image is re-lit by the current lighting. This seems most odd. Is this what I should expect? Here is the code I am using; with its workarounds (forcing the lighting and overwriting the alpha), it produces the results I want.

void front_buffer_glReadPixels
(
	int x0, int y0, int width, int length,
	uint8_t *buffer
)
{
	vec3 ads_lighting;
	uint8_t *alpha;
	int z;

	// retrieve current lighting
	get_uniform_variable(GL_FLOAT_VEC3, "ads_lighting", &ads_lighting);

	// turn the light on to capture the screen
	set_uniform_value(GL_FLOAT_VEC3, "ads_lighting", 1.0, 0.0, 0.0);

	glReadBuffer(GL_FRONT);
	glReadPixels
	(
		x0, y0, width, length,
		GL_RGBA, GL_UNSIGNED_BYTE,
		buffer
	);

	// set all alpha to opaque
	for(z = 0, alpha = buffer + 3; z < width * length; z++, alpha += 4)
	{
		*alpha = ALPHA_OPAQUE;
	}

	// restore previous lighting
	set_uniform_variable(GL_FLOAT_VEC3, "ads_lighting", &ads_lighting);
}

It’s hard to give any advice without knowing how the captured data differs from what you expect. However: glReadPixels isn’t affected by the current shader program or any of its state. In fact, it isn’t affected by any state other than framebuffer state and the pixel storage modes (glPixelStorei).

How are you examining the data? Dumping it to an image file? If you’re viewing it within the program, I’d assume that the rendering process is affecting it.

Some of this isn’t too surprising. Reading back pixels from the system framebuffer/window takes a different path through the graphics pipeline. If you don’t both allocate and use destination alpha (the alpha in the system framebuffer) for something useful, then you shouldn’t try to do anything meaningful with the contents of the alpha channel you read back. Downsampling of MSAA framebuffers and handling of sRGB framebuffers may be performed differently as well. On some systems and configurations, reading back the system framebuffer isn’t supported at all, and even when it is, whether you get anything useful can depend on things like the pixel ownership test (window occlusion). In short, what you get when reading back from the window can be very platform-specific.

As for the image being re-lit, that seems a bit harder to believe. I could believe that perhaps what you see in the window has had some gamma curve applied at the system/display manager level, whereas what you read back hasn’t.

If you want more control over what gets read back, I would consider instead rendering to a texture in an off-screen FBO. With this rendering, you’re in tighter control of what internal formats are allocated, how rendering is performed, and if/when various steps at the tail-end of the rendering pipeline happen.

I have deduced that the lighting is my problem and NOT a glReadPixels() issue.

It remains, however, that reading the front buffer doesn’t give me what is on the screen. The hardware I have is an Nvidia GPU etc., and I get the same result from an Intel GPU.

The captured image seems to have alpha values that were present when the textures were rendered. The ‘background’ to the captured image seems to be white so the captured image is very, very pale. I have saved the image as a jpeg and also re-rendered it as a texture and the result is the same. If I set the captured image’s alpha to opaque I get what I see on the screen.

Perhaps, naively, I imagined that glReadPixels() would give me exactly what I see on the screen which by definition is fully composited.

I went back to the description of glReadPixels() and it says:

glReadPixels and glReadnPixels return pixel data from the frame buffer.

In general, I understand the frame buffer to be the memory that is scanned and pushed down the wire to the display at the operating frame rate. Thus what I see is what I should get. Clearly not.
The documentation should have a note to the effect that the front buffer is not the same as the frame buffer.

And what you see ain’t what you get.

Then the issue is with how you’re displaying the captured data, not the data itself.

What you should see is what’s in the red, green, and blue components of the framebuffer. The alpha component isn’t sent to the monitor. But if you save all four components to a file then load it in an image viewer, that may use the alpha channel to blend the image with a background colour.

The only reason for the system framebuffer to have an alpha channel is if you want to use GL_DST_ALPHA or GL_ONE_MINUS_DST_ALPHA as an argument to glBlendFunc (or similar). It doesn’t affect what’s sent to the monitor. If you just want the “visible” components, either use GL_RGB rather than GL_RGBA (but you may need to set GL_PACK_ALIGNMENT to 1) or simply ignore the alpha component.

Since things are working for me, perhaps I shouldn’t continue this correspondence. 😜 However, you write:

Then the issue is with how you’re displaying the captured data, not the data itself.

I’m expecting the alpha channel to be opaque and it isn’t. My intuition is defeated. So from my perspective it is the data that has a problem. When I grab the screen data and push it back onto the screen as a texture, it isn’t what I previously saw on the screen. So I would argue that OpenGL is not consistent with itself. I expected an idempotent operation.

If I chose to just get the RGB I’d have to do a bunch of monkeying around to put it in an RGBA buffer. It is easier, in four lines of code, to make the captured data opaque.

The alpha channel will contain whatever was written to it, or the alpha component of the clear colour for pixels which aren’t touched by any rendered primitive. If you aren’t using a blending function which makes use of destination alpha, there’s no reason to request an alpha channel for the system framebuffer (although it’s possible that you’ll get one anyhow; format requests are … well, requests; various factors may result in the actual format differing from the requested format). If the framebuffer lacks an alpha channel, the alpha values returned by glReadPixels will all be one.

Were you rendering with blending enabled? Because blending explicitly modifies the colour components according to the alpha component. Rendering with blending disabled should result in the framebuffer containing exactly what was emitted by the fragment shader (or by the fixed-function pipeline, but the fact that you’re setting uniforms suggests that isn’t applicable here).

To re-iterate: it appears that the issue isn’t with the contents of the framebuffer or the operation of glReadPixels, but that the combination of reading the framebuffer then rendering that data back to the framebuffer isn’t an identity operation. While it’s possible to set up rendering so that texture values are copied to the framebuffer exactly, that isn’t the only possibility nor the most common one.