Sampling Depth Buffer returns only Zero!


I have two framebuffers that I use to render two different objects. Now I want to merge them before displaying the result in the default framebuffer.

I pass the color and depth buffers to a fragment shader as sampler2D uniforms and compare the depths there to pick the winning pixel.

I am able to sample the color values properly, but the depth values are 0 throughout. I am not sure what might be causing this; depth testing is enabled before rendering.

Here are snippets of my code:

glGenFramebuffers(1, &_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, _fbo);

glGenTextures(1, &_cbo);
glGenTextures(1, &_dbo);

glBindTexture(GL_TEXTURE_2D, _cbo);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, dim.x, dim.y, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glBindTexture(GL_TEXTURE_2D, _dbo);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, dim.x, dim.y, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _cbo, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, _dbo, 0);
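(Editor's note: two checks worth adding to any setup like this, sketched against the names above — the filter values are a common choice, not from the original post. Textures created without mipmaps keep the default min filter `GL_NEAREST_MIPMAP_LINEAR`, which makes them incomplete and sample as zero, and an incomplete framebuffer silently misbehaves as well:)

```cpp
// Give each attachment explicit non-mipmap filters, otherwise the
// texture is incomplete and sampling it returns zero.
glBindTexture(GL_TEXTURE_2D, _dbo);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// After attaching, confirm the FBO is usable before rendering to it.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error (log, assert, ...)
}
```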

This is how I bind the textures and send the sampler uniforms to the shader:

glUniform1i(glGetUniformLocation(QuadShader.Program, "color1"), 0);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, cbo2);
glUniform1i(glGetUniformLocation(QuadShader.Program, "color2"), 1);

glActiveTexture(GL_TEXTURE2);
glBindTexture(GL_TEXTURE_2D, dbo1);
glUniform1i(glGetUniformLocation(QuadShader.Program, "depth1"), 2);

glActiveTexture(GL_TEXTURE3);
glBindTexture(GL_TEXTURE_2D, dbo2);
glUniform1i(glGetUniformLocation(QuadShader.Program, "depth2"), 3);

glDrawArrays(GL_TRIANGLES, 0, 6);

This is how my shader looks:

uniform sampler2D color1;
uniform sampler2D color2;
uniform sampler2D depth1;
uniform sampler2D depth2;

out vec4 FragColor;

void main() {
  ivec2 texcoord = ivec2(floor(gl_FragCoord.xy));

  vec4 depth1 = texelFetch(depth1, texcoord, 0);
  vec4 depth2 = texelFetch(depth2, texcoord, 0);

  if (depth1.z > depth2.z)
    FragColor = texelFetch(color1, texcoord, 0);
  else
    FragColor = texelFetch(color2, texcoord, 0);
}


Depth textures only have a single channel; the vectors returned from texelFetch() will be [depth,0,0,1]. Use e.g.:

  if (depth1.r > depth2.r)

You are awesome SIR!!! Thank you very much!

@GClements: On another note, I’m wondering if this entire process of sampling framebuffer objects and depth testing to merge FBOs is possible in fixed function?

I’m not entirely sure. With multi-texturing (glTexEnv(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE)), you can subtract textures, and you can scale the result by a factor of 4. Results are clamped to the range [0,1]. You may be able to use multiple stages to increase the overall scale factor to get something sufficiently close to a step function (although you will get intermediate values when the depth values are close). There may be something I’m overlooking, though (in particular, I’m unsure whether the original precision is maintained).
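(Editor's note: a hypothetical sketch of one such combiner stage, for illustration only — it assumes one depth texture is bound to the previous unit and subtracts this unit's texture from it, then scales the clamped result by 4, per the description above:)

```cpp
// One GL_COMBINE stage: result = clamp(previous - texture) * 4.
// Requires OpenGL 1.3 / ARB_texture_env_combine.
glActiveTexture(GL_TEXTURE1);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,      GL_SUBTRACT);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,      GL_PREVIOUS); // Arg0: prior stage
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB,     GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB,      GL_TEXTURE);  // Arg1: this unit
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB,     GL_SRC_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE,        4.0f);        // max legal scale
```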

But there’s also the question of: why bother? Systems which don’t support at least OpenGL 2.0 are quite rare at this point. Particularly when you exclude Windows’ software fall-back, as that only supports OpenGL 1.1 (multi-texturing was added in 1.3).

@GClements: Thanks again for your response.

I am working on a version-agnostic application where we implement both fixed-function and shader-based paths, abstracted underneath generic functions. That’s the reason I need support in fixed function as well.

I do not get the idea of multi-texturing. All I need is to choose a winning pixel by comparing the depth buffers, right? Isn’t there an easy way to accomplish this with fixed function as well?


It might be possible with multi-texturing, but I’m not sure. Even if it works, it isn’t exactly “easy”, it might not be particularly accurate, and it requires at least 1.3 (so not strictly “version-agnostic”).

Sure. But you need to adapt your thinking to the tools that were well supported by older versions of OpenGL instead.

First, your basic algorithm is to render two objects, keeping fragments which have the greatest depth value.
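(Editor's note: stated outside of GL, that per-fragment rule is just a comparison. A minimal C++ sketch — the `Fragment` struct and function name are hypothetical, for illustration only:)

```cpp
#include <cstdint>

// A candidate fragment for one pixel: its color and its depth value.
struct Fragment {
    std::uint32_t color;
    float depth;
};

// Keep whichever fragment has the greater depth value -- the same
// comparison the fragment shader performs per pixel.
Fragment mergeFragments(const Fragment& a, const Fragment& b) {
    return (a.depth > b.depth) ? a : b;
}
```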

Your implementation above (with a shader sampling from 2 depth textures and 2 color textures) is only one way to implement this algorithm.

Consider that another implementation (which doesn’t require shaders or FBOs) is just to render both objects into the “same” depth+color buffer pair with depth test enabled. This is well supported by all versions of OpenGL. Further, with MSAA rendering enabled, this latter, revised approach will also give you subsample precision on the depth tests and smoothed edges where the objects intersect, whereas your shader approach above will not (unless you enable SSAA – aka ARB_sample_shading – which is much more expensive than MSAA rendering).
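(Editor's note: a sketch of that revised approach — the draw calls are hypothetical placeholders. Note the depth func is flipped to `GL_GREATER` and the depth buffer is cleared to 0.0 to match the “greatest depth value wins” rule above:)

```cpp
// Render both objects into one depth+color buffer pair and let the
// fixed-function depth test pick the winner per sample.
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_GREATER);   // greatest depth value wins
glClearDepth(0.0);         // clear to the losing extreme
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

drawObject1();             // hypothetical draw calls
drawObject2();
```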

Also note that even with fixed-function OpenGL, unless you’re talking really, really ancient OpenGL, some form of shader support is usually available. Nowadays, if you want, you can intermix some fixed functionality with GLSL shaders, ASM shaders, and/or texture combiners. You need to look at the base OpenGL version and the list of extensions of the OpenGL implementation you’re targeting to determine what the best option is.

Finally a nit: you don’t sample from framebuffer objects (or framebuffers in general). Framebuffers are used as the target for rendering. You sample from textures.