Hi everyone,
I’m porting some code written for the desktop using OpenGL 3.3 to mobile using OpenGL ES 3.0. It’s mostly been a straightforward process, except for a portion of the code that uses glReadPixels() with GL_DEPTH_COMPONENT in order to read the depth buffer.
However, glReadPixels() cannot be used with GL_DEPTH_COMPONENT in ES 3.0, and the only workaround I’ve seen mentioned is to attach a depth texture, render that texture onto a full-screen quad, and then call glReadPixels() on the colour output of the quad pass. I mostly have this working: the depth image that gets displayed looks accurate to the naked eye, and the same number of unique depth values gets printed by both the direct and the “indirect” method, which suggests precision isn’t to blame.

But when I print the values as text, they don’t line up exactly; e.g. I’ll get 0.01844 with the direct method vs 0.0201528 with the indirect method for a given pixel. By testing with a shader that only outputs single constant values, I’m fairly confident that reading the final colour values from the quad is not the issue. That leads me to believe the problem is either in the creation of the depth texture or in the sampling of it in GLSL.
Code for creating the depth texture:
glGenTextures(1, &depthTextureID);
glBindTexture(GL_TEXTURE_2D, depthTextureID);
// ES 3.0 requires GL_UNSIGNED_INT (not GL_FLOAT) as the type for
// GL_DEPTH_COMPONENT24; GL_FLOAT is only valid with GL_DEPTH_COMPONENT32F
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
Then attaching it to the framebuffer where the 3D objects are rendered:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTextureID, 0);
C++ code for rendering the quad:
glClear(GL_COLOR_BUFFER_BIT);
depthDisplayShaderProgram->bind();
glActiveTexture(GL_TEXTURE0); // sampler uniform is left at its default of 0
glBindTexture(GL_TEXTURE_2D, depthTextureID);
glDisable(GL_DEPTH_TEST); // the quad pass itself doesn't need depth testing
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, (GLvoid*)0); // quad VAO/index buffer already bound
GLSL fragment shader code:
#version 300 es
precision highp float; // ESSL 3.00 fragment shaders require a default float precision
layout(location = 0) out vec4 fragColor;
in vec2 vTexCoords;
// highp matters here: sampler2D defaults to lowp in fragment shaders, which
// would quantise the 24-bit depth values before they reach the colour output
uniform highp sampler2D sampler;
void main(void)
{
    float d = texture(sampler, vTexCoords).r;
    fragColor = vec4(d, d, d, 1.0);
}
I’m not sure what might account for the discrepancy in values I’m seeing. Is there some setting I should still change? Is the mismatch simply unavoidable? Would very much appreciate any suggestions.
Update:
If I use glClearDepthf to set the depth buffer/texture to some constant value, the two methods give identical results. So maybe there is a problem with reading the colour values after all, and it just doesn’t reveal itself when every value is constant? I’ll share the code for reading the colour output if it’s requested, but it’s a bit on the long side, so I’m hesitant to post it unless the above definitely looks fine.