Rendering exact depth buffer values to a quad

Hi everyone,

I’m porting some code written for the desktop using OpenGL 3.3 to mobile using OpenGL ES 3.0. It’s mostly been a straightforward process, except for a portion of the code that uses glReadPixels() with GL_DEPTH_COMPONENT in order to read the depth buffer.

However, glReadPixels() cannot be used with GL_DEPTH_COMPONENT in ES 3.0, and the only workaround I’ve seen mentioned is to use a depth texture, render it to a quad, and then call glReadPixels() on the colour output of that quad pass. I mostly have this working: the depth image that gets displayed looks accurate to the naked eye, and the same number of unique depth values gets printed by both the direct and “indirect” methods, which makes me think precision isn’t to blame. But when I print out the values as text, they don’t line up exactly; e.g. I’ll get 0.01844 with the direct method vs 0.0201528 with the indirect method for a given pixel. By testing with a shader that only outputs constant values, I’m fairly confident that reading the final colour values back from the quad is not the issue, so I believe the problem lies either in the creation of the depth texture or in how I sample it in GLSL.
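
For reference, the desktop readback being ported is essentially just a direct glReadPixels() call (simplified sketch; width and height are the viewport dimensions):

// Desktop GL 3.3: read normalised depth values straight from the depth buffer
// (this format/type pairing is not accepted by glReadPixels in ES 3.0)
std::vector<GLfloat> depths(width * height);
glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depths.data());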

Code for creating the depth texture:

glGenTextures(1, &depthTextureID);
glBindTexture(GL_TEXTURE_2D, depthTextureID);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL); // ES 3.0 only accepts GL_UNSIGNED_INT with GL_DEPTH_COMPONENT24 (GL_FLOAT pairs with GL_DEPTH_COMPONENT32F)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

Then attaching it to the framebuffer where the 3D objects are rendered:

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureID, 0);
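
(In case it’s relevant, a completeness check right after this attach would be along these lines:)

// Sanity check after attaching: the framebuffer should report complete
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
    std::cerr << "depth framebuffer incomplete" << std::endl;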

C++ code for rendering the quad:

glClear(GL_COLOR_BUFFER_BIT);
depthDisplayShaderProgram->bind();
glBindTexture(GL_TEXTURE_2D, depthTextureID);
glDisable(GL_DEPTH_TEST);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (GLvoid*)0);

GLSL fragment shader code:

layout(location = 0) out vec4 fragColor;
in vec2 vTexCoords;
uniform sampler2D sampler;
void main(void)
{
    float d = texture(sampler, vTexCoords).r;
    fragColor = vec4(d, d, d, 1.0f);
}
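
The vertex shader is just a pass-through for the fullscreen quad, roughly like this (the attribute names here are placeholders rather than my exact code):

// version / precision directives omitted, as in the fragment shader above
layout(location = 0) in vec2 aPos;        // quad corners in NDC
layout(location = 1) in vec2 aTexCoords;
out vec2 vTexCoords;
void main(void)
{
    vTexCoords = aTexCoords;
    gl_Position = vec4(aPos, 0.0, 1.0);
}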

I’m not sure what might account for the discrepancy in the values I’m seeing. Is there some setting I still need to change? Is the mismatch simply unavoidable? I’d very much appreciate any suggestions.

Update:
If I use glClearDepthf to set the depth buffer/texture to some constant value, then the direct and indirect results are identical. So maybe it is a problem with reading the colour values, but one that just doesn’t reveal itself when all of the values are constant? I’ll share the code for reading the colour output if it’s requested, but it’s a bit on the long side, so I’m hesitant to do so unless the above definitely looks fine.
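
(For reference, the constant-value test is roughly the following: clear the depth attachment to a known value and skip the scene draw, so the quad pass should see that value everywhere. sceneFramebufferID is a placeholder name.)

// Clear the depth texture to a known constant instead of drawing the scene,
// then run the quad pass and read back the colour output as usual
glBindFramebuffer(GL_FRAMEBUFFER, sceneFramebufferID);
glClearDepthf(0.01844f);
glClear(GL_DEPTH_BUFFER_BIT);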

That’s a fairly substantial error (~1/584), which indicates a loss of precision. What’s the format of the colour buffer to which these values are being written?

Also, I don’t see any precision qualifiers. These are ignored in desktop OpenGL, which always uses highp, but they matter for ES. Note that sampler2D defaults to lowp; try

uniform highp sampler2D sampler;
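
With the directives in place, the whole fragment shader would look something like:

#version 300 es
precision highp float;                  // fragment shaders have no default float precision in ES
layout(location = 0) out vec4 fragColor;
in vec2 vTexCoords;
uniform highp sampler2D sampler;        // sampler2D otherwise defaults to lowp
void main(void)
{
    float d = texture(sampler, vTexCoords).r;
    fragColor = vec4(d, d, d, 1.0);
}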

I’m currently testing out the code on desktop, where it’s easier to debug, so the precision qualifiers should not matter. But thanks for letting me know about them! It saves me from a headache I would otherwise have had when I run this particular portion on my phone again.

The code for creating the texture that I write to in the quad rendering stage is:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32I, width, height, 0, GL_RGBA_INTEGER, GL_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, quadColourTexture, 0);

The glReadPixels command I use to read this texture would be:

glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(0, 0, mat.cols, mat.rows, GL_RGBA_INTEGER, GL_INT, mat.data);

Where mat is an OpenCV matrix. It feels like precision shouldn’t be an issue: I’m using 32 bits for everything, I get the correct number of unique depth values (whereas I get too many unique values if I use GL_DEPTH_COMPONENT32F, for example), and if I use glClearDepthf() to set the depth buffer to a constant matching a selected pixel from the direct method, e.g. glClearDepthf(0.01844), then I get 0.01844 back out. I don’t get 0.01844 back out when the actual depth texture contents are used, though.
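
For that read to be valid, mat has to be a 4-channel 32-bit signed integer image, e.g.:

// Allocation matching the GL_RGBA_INTEGER / GL_INT readback (sketch)
cv::Mat mat(height, width, CV_32SC4);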

This doesn’t add up. Your fragment shader has a vec4 output but you’re binding an integer texture as the colour buffer. You’re reading that as GL_INT but say you “get 0.01844 back out”.

You need to explain exactly what you’re doing, without omitting steps or paraphrasing.

For sure. Sorry for the omissions.

According to the documentation for glReadPixels in OpenGL ES 3, “Only two format / type parameter pairs are accepted. For normalized fixed point rendering surfaces, GL_RGBA / GL_UNSIGNED_BYTE is accepted. For signed integer rendering surfaces, GL_RGBA_INTEGER / GL_INT is accepted.” So the only way I can read a 32-bit result, as far as I understand, is with the GL_RGBA_INTEGER/GL_INT pairing. However, the value that I really want is a floating-point value. So in the shader, I output things as a vec4 of floats still, and then after I get my value back with glReadPixels, I just interpret the values I read as floats. For constant values, this seems to work just fine. Ideally, I would have used a floating-point texture and then read GL_FLOAT with glReadPixels(), but unfortunately it seems that ES does not allow that, hence this odd workaround.
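
Concretely, the reinterpretation on my side is just a bit-level copy of the red channel, something along these lines (simplified sketch; row and col are just the pixel being inspected):

// Treat the 32-bit pattern read back as GL_INT as the bits of an IEEE-754 float
GLint bits = mat.at<cv::Vec4i>(row, col)[0];   // red channel carries the depth value
float depth;
std::memcpy(&depth, &bits, sizeof(float));     // bit-level copy; needs <cstring>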

I see.

My interpretation of the spec is that for a GL_RGBA32F texture, the implementation-defined format/type pair will be GL_RGBA/GL_FLOAT, as that is the only combination listed in table 3.2 for that internal format, although the language is a bit vague. If you’re getting the correct value for a constant, I’d assume that the use of an integer texture is valid and that it’s just writing the IEEE-754 representation of the float to the integer texture. FWIW, you can do that “legitimately” with floatBitsToInt. Implementations aren’t required to use IEEE-754, but it seems to be universal on desktop systems; I don’t know about mobile GPUs, though.
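
The explicit version would be something like this on the shader side (sketch; the output is then declared as an ivec4 to match the GL_RGBA32I attachment):

// version / precision directives omitted for brevity
// Write the IEEE-754 bit patterns into the integer attachment explicitly
layout(location = 0) out ivec4 fragBits;
in vec2 vTexCoords;
uniform highp sampler2D sampler;
void main(void)
{
    float d = texture(sampler, vTexCoords).r;
    fragBits = floatBitsToInt(vec4(d, d, d, 1.0));
}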

The reported error (~1/584) is far greater than can be explained by the mismatch between the normalised and float formats (and in any case, you have exactly the same issue when you read normalised data using glReadPixels or glGetTexImage with GL_FLOAT). Have you checked for alignment errors (i.e. sampling the wrong pixel)? Does using texelFetch(sampler, ivec2(gl_FragCoord.xy), 0) produce the same result?
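
i.e. replacing the sampling line with something like:

// Fetch the exact texel under this fragment, bypassing filtering and any texture-coordinate
// rounding (assumes the quad pass renders at the depth texture's resolution)
float d = texelFetch(sampler, ivec2(gl_FragCoord.xy), 0).r;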

Personally, I’d check whether the relationship is 1:1, i.e. does each “direct” value correspond to exactly one “indirect” value. If so, I’d do some basic statistical analysis on the results (minimum/maximum error, regression coefficients) to see if there’s an obvious pattern.
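
A crude first pass at that analysis might be (sketch; direct and indirect are assumed to be matching per-pixel value arrays):

// Report the range of the per-pixel error between the two readbacks
double minErr = std::numeric_limits<double>::max(), maxErr = 0.0;
for (size_t i = 0; i < direct.size(); ++i)
{
    double e = std::abs(direct[i] - indirect[i]);
    minErr = std::min(minErr, e);
    maxErr = std::max(maxErr, e);
}
std::cout << "error range: [" << minErr << ", " << maxErr << "]" << std::endl;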


Sorry for the late reply; other problems came up in this project that I had to prioritize first.

So, I’m embarrassed to say that the values are lining up now, and in fact, I can’t seem to replicate the problem that I had before. If I ever come across the reason for the discrepancy again, I’ll update this post with what the problem was, but at this point I think it might have been an error with how I recorded the original “direct” values earlier on. I’m marking your latest post as the solution since it is what motivated me to record all of the values again from scratch.

Also, it turns out that reading GL_FLOAT with glReadPixels() in ES 3 is fine; the wording in the documentation is a bit misleading, I feel, but I’m happy that it works, at least. Thanks for the help!
