Render depth to texture issue

I’m implementing a deferred shader and am running into issues obtaining a 3D view-space position from the depth buffer and the screen-space position. If I attach a depth renderbuffer for depth testing and write the depth values to a colour attachment, everything works as expected, like so:


    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT, w, h);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_RENDERBUFFER_EXT, depthBuffer);

    ...

    glGenTextures(1, &texDepth);
    glBindTexture(GL_TEXTURE_2D, texDepth);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_FLOAT32_ATI, w, h, 0, GL_RGB, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT2_EXT, GL_TEXTURE_2D, texDepth, 0);

    GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_COLOR_ATTACHMENT2_EXT};
    glDrawBuffers(3, buffers);

    /////////////////////////////////////////////////////////////////////////////////////////////////////////

    // G Buffer fragment shader
    gl_FragData[0] = vec4(oDiffuse, 1.0);
    gl_FragData[1] = vec4(0.5f * (normalize(oNormal) + 1.0f) , 1.0);
    gl_FragData[2] = vec4(oDepth.x / oDepth.y, 1.0, 1.0, 1.0);

    /////////////////////////////////////////////////////////////////////////////////////////////////////////

    // Lighting fragment shader
    float Depth    = texture2D(iTexDepth, oTexCoords).r;
    vec4  Position = vec4(oTexCoords.x * 2.0f - 1.0f, (oTexCoords.y * 2.0f - 1.0f), Depth, 1.0);

    Position  = mtxInvProj * Position;
    Position /= Position.w;

However, when I try to use GL_DEPTH_ATTACHMENT_EXT as the attachment point (see listing below), I get incorrect results: lighting changes with camera position, triangles facing away from the light source are lit, and so on. When I display just the depth buffer, data is being written, but the values seem much more “bunched together” than when using GL_COLOR_ATTACHMENT2_EXT as the attachment point. For example, if I move the camera towards the mesh, the depth values “pop” into view much more gradually with the colour attachment than with the depth attachment, so I figured that an incorrect reconstruction of the view-space vector for a given fragment is throwing off my point lighting. Any ideas?



    glGenTextures(1, &texDepth);
    glBindTexture(GL_TEXTURE_2D, texDepth);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, w, h, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
    glTexParameteri (GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, texDepth, 0);

    GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT};
    glDrawBuffers(2, buffers);

      /////////////////////////////////////////////////////////////////////////////////////////////////////////

    // G Buffer fragment shader
    gl_FragData[0] = vec4(oDiffuse, 1.0);
    gl_FragData[1] = vec4(0.5f * (normalize(oNormal) + 1.0f) , 1.0);

    /////////////////////////////////////////////////////////////////////////////////////////////////////////

    // Lighting fragment shader
    float Depth    = texture2D(iTexDepth, oTexCoords).r;
    vec4  Position = vec4(oTexCoords.x * 2.0f - 1.0f, (oTexCoords.y * 2.0f - 1.0f), Depth, 1.0);

    Position  = mtxInvProj * Position;
    Position /= Position.w;

I’m using ATI hardware on the Win32 platform. Thanks!

gl_FragData[2] = vec4(oDepth.x / oDepth.y, 1.0, 1.0, 1.0);

I don’t know what oDepth is but I suspect it’s ecPosition (aka eye-space position)?
In which case what you have written out is a linear eye-space depth value which is not quite the same value as the depth buffer contains.
You have two choices:

  1. Write out the linear depth, or just write out the eye-space position X, Y, Z values. (That’s what I do!)
    This has the advantage that it’s dead easy and fool-proof to retrieve a pixel’s eye-space position without any messing about; see the sketch after this list.
    The downside is an extra render target.

  2. Use the depth buffer/texture. Convert the non-linear depth value into eye-space, which will depend upon the zNear and zFar values of your projection matrix.
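A minimal sketch of option 1 (writing the eye-space position to an extra colour attachment) could look like this; the varying/sampler names oEcPosition and iTexPosition are just placeholders:

// G-buffer vertex shader: pass the eye-space position through
varying vec3 oEcPosition;
oEcPosition = vec3(gl_ModelViewMatrix * gl_Vertex);

// G-buffer fragment shader: store it in a floating-point colour attachment
gl_FragData[2] = vec4(oEcPosition, 1.0);

// Lighting fragment shader: a single fetch gives the eye-space position directly
vec3 ecPos = texture2D(iTexPosition, oTexCoords).xyz;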

To elaborate on the depth reconstruction from a depth texture: the main part of this post comes from similar topics that have come up before on this forum (Dark Photon and others). I have taken what I think should work, but have never had the guts/time to rip my engine apart and try it.
Please give it a go and see what you get.


vec3 PositionFromDepth_DarkPhoton(in float depth)
{
  vec2 ndc;             // Reconstructed NDC-space position
  vec3 eye;             // Reconstructed EYE-space position

  eye.z = near * far / ((depth * (far - near)) - far);

  ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
  ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

  eye.x = ( (-ndc.x * eye.z) * (right-left)/(2*near)
            - eye.z * (right+left)/(2*near) );
  eye.y = ( (-ndc.y * eye.z) * (top-bottom)/(2*near)
            - eye.z * (top+bottom)/(2*near) );

  return eye;
}

which simplifies to…

eye.x = (-ndc.x * eye.z) * right/near;
eye.y = (-ndc.y * eye.z) * top/near;

And note that typically you don’t store Z_ndc NDC-space depth (-1…1) in a depth texture. You usually store Z_viewport – that is viewport-space depth (0…1, or whatever you set glDepthRange to). But undoing that mapping to get to Z_ndc is easy.

Referring to the projection matrix, for a perspective projection you have:

z_ndc = z_clip / w_clip
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z ] / -z_eye

The 2nd step presumes w_eye = 1. Solve the above for z_eye, and you get:

float z_eye = gl_ProjectionMatrix[3].z/(-z_ndc - gl_ProjectionMatrix[2].z);

Typically your glDepthRange is 0…1, so z_ndc = z_viewport * 2 - 1, so plugging that in…

float z_eye = gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

That’ll get you from viewport-space Z to eye-space Z, for a perspective projection.

…Note: my own algebra slightly disagrees with the source I got this from, and I make this instead:
float z_eye = -gl_ProjectionMatrix[3].z/((z_viewport * -2.0) + 1.0 - gl_ProjectionMatrix[2].z);
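Putting those two steps together, a sketch of a small helper based on the first form above (assuming glDepthRange(0, 1) and a perspective projection):

// Sketch: eye-space Z from a window-space (viewport) depth sample
float ZEyeFromDepth(in float z_viewport)
{
  float z_ndc = z_viewport * 2.0 - 1.0;   // undo the glDepthRange(0,1) mapping
  return gl_ProjectionMatrix[3].z / (-z_ndc - gl_ProjectionMatrix[2].z);
}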

Hi BionicBytes, thanks for going into so much detail!

I don’t know what oDepth is but I suspect it’s ecPosition (aka eye-space position)?

Sorry, I should have made this clear. For the first (successful) method where I store the depth in a colour attachment, oDepth.x and oDepth.y are the z and w values of the projected vertex, as in:


// Vertex shader
gl_Position = gl_ModelViewProjectionMatrix * iPosition;
oDepth      = gl_Position.zw;

What I do is obtain the x and y components of the NDC using the texture coordinates of the quad, in the same manner as you, and use the sample from my depth texture as the z value. I then multiply by the inverse of the projection matrix and divide the resulting vector by its w component to get my fragment position in eye space. I guess, like you say, when I sample the depth buffer “directly” by using GL_DEPTH_ATTACHMENT_EXT as the attachment point, the non-linear depth values are messing up my transformation into eye space.

With the snippet you posted, what are widthInv, heightInv, left, right, top and bottom?

Again, thanks for your help.

With the snippet you posted, what are widthInv, heightInv, left, right, top and bottom?

Yes, the window (viewport) dimensions.

Thanks, so if my viewport was, say, 800x600, left would be -400, right would be 400, top would be 300, bottom would be -300, widthInv would be -800 and heightInv would be -600. Is this correct?

Err, no!
If the viewport is 800 x 600 then right is 800 and left is 0.
Top is 600 and bottom is 0.
Width is 800, so widthInv is 1/800 and heightInv is 1/600.

I don’t know where you got your values from?

…for symmetric perspective frustums (i.e. left = -right and bottom = -top).

Hmmm… I tried that snippet, but I didn’t get correct results with either the GL_COLOR_ATTACHMENT2_EXT or the GL_DEPTH_ATTACHMENT_EXT approach. Here it is verbatim, along with other pertinent shader code:


vec3 PositionFromDepth_DarkPhoton(in float depth)
{
  float near      = 0.1;
  float far       = 50.0;
  float widthInv  = 1.0 / 800.0;
  float heightInv = 1.0 / 600.0;
  float top       = 600.0;
  float bottom    = 0.0;
  float left      = 0.0;
  float right     = 800.0;

  vec2 ndc;             // Reconstructed NDC-space position
  vec3 eye;             // Reconstructed EYE-space position

  eye.z = near * far / ((depth * (far - near)) - far);

  ndc.x = ((oTexCoords.x * widthInv) - 0.5) * 2.0;
  ndc.y = ((oTexCoords.y * heightInv) - 0.5) * 2.0;

  eye.x = ( (-ndc.x * eye.z) * (right-left)/(2.0*near)
            - eye.z * (right+left)/(2.0*near) );
  eye.y = ( (-ndc.y * eye.z) * (top-bottom)/(2.0*near)
            - eye.z * (top+bottom)/(2.0*near) );

  return eye;
}

float Depth  = texture2D(iTexDepth, oTexCoords).r;
Position.xyz = PositionFromDepth_DarkPhoton(Depth);

// do lighting stuff

Have I missed a trick here? I know I could just carry on rendering z/w (from the projected vertex) to a colour attachment, reconstructing the NDC vector and unprojecting it in the fragment shader, because I know that’s working as expected, but it’s really bugging me that things aren’t going to plan when using a depth attachment.

Ok, so the closer I get to the model, the more “correct” the result is. When I’m up real close, all the lighting works as it should (i.e. identical to when using the GL_COLOR_ATTACHMENT2_EXT approach). Does this shed any more light on the problem?

Furthermore, if I convert the depth values from both approaches to linear values, the result is not the same:

Linear depth, GL_COLOR_ATTACHMENT2_EXT approach: [screenshot]

Linear depth, GL_DEPTH_ATTACHMENT_EXT approach: [screenshot]

I fixed it! The key was this particular line in the specs for the glDepthRange function:

After clipping and division by w, depth coordinates range from -1 to 1, corresponding to the near and far clipping planes. glDepthRange specifies a linear mapping of the normalized depth coordinates in this range to window depth coordinates. Regardless of the actual depth buffer implementation, window coordinate depth values are treated as though they range from 0 through 1 (like color components). Thus, the values accepted by glDepthRange are both clamped to this range before they are accepted.

A simple Depth = (2.0f * Depth - 1.0f) solved the problem :smiley: So now I can use the depth buffer I had to attach for z testing in the first place without adding a superfluous colour attachment to store my depth data. Time for sleep!
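For reference, this is how the earlier lighting snippet looks with that remap folded in:

// Lighting fragment shader (depth-attachment version, with the [0,1] -> [-1,1] remap)
float Depth    = texture2D(iTexDepth, oTexCoords).r;
Depth          = 2.0 * Depth - 1.0;
vec4  Position = vec4(oTexCoords.x * 2.0 - 1.0, oTexCoords.y * 2.0 - 1.0, Depth, 1.0);

Position  = mtxInvProj * Position;
Position /= Position.w;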

Damn! BionicBytes mentioned that in the 3rd post, so I could have saved myself a lot of bother. Never mind heh :slight_smile:

The way I do the “unprojection” phase is by fusing everything into the inverse projection matrix I send to the shader; this includes the viewport de-transformation too. In some pseudocode to visualize what I’m saying:


mat4 screen;
screen.Scale(screen_width,screen_height,1.0f);

mat4 bias;
bias.Scale(0.5,0.5,0.5);
bias.Translate(1,1,1);


mat4 unproject = screen * bias * projection;
unproject.Invert();
unproject.Transpose();

And send the unproject matrix to the shader as uniform.
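With GLM (which is column-major, so no final transpose is needed) that setup might be sketched as follows; uUnprojectLoc is a placeholder uniform location:

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// bias maps NDC [-1,1] to [0,1]: v' = 0.5 * (v + 1)
glm::mat4 bias   = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f)) *
                   glm::translate(glm::mat4(1.0f), glm::vec3(1.0f));
// screen maps the [0,1] x/y range to window pixel coordinates
glm::mat4 screen = glm::scale(glm::mat4(1.0f), glm::vec3(float(screen_width), float(screen_height), 1.0f));

glm::mat4 unproject = glm::inverse(screen * bias * projection);
glUniformMatrix4fv(uUnprojectLoc, 1, GL_FALSE, glm::value_ptr(unproject));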
Within the shader:


vec4 frag = gl_FragCoord; // gl_FragCoord is already perspective-divided; its w component stores 1.0/W
frag.w = 1.0; //W needs to be set to 1.0 for it to be correct
frag.z = depth; //depth is sampled directly from the depth texture. NOT linearized.

frag = unproject * frag; //Apply the unprojection
frag.xyz /= frag.w; //Reapply the scaling

And that gives you frag in eye (or camera, if you prefer to call it that) space.

The depth texture I use is normally a non-linear depth attachment declared as GL_DEPTH_COMPONENT32F.
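A GL_DEPTH_COMPONENT32F depth texture attached to the FBO can be set up roughly like this (a sketch using the core entry points rather than the EXT ones from earlier in the thread; it needs GL 3.0 or ARB_depth_buffer_float):

glGenTextures(1, &texDepth);
glBindTexture(GL_TEXTURE_2D, texDepth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, texDepth, 0);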

Hope this helps a bit.

float top = 600.0;
float bottom = 0.0;
float left = 0.0;
float right = 800.0;

Can you show us how you set up the projection matrix? I just want to check those values above.

vec2 ndc; // Reconstructed NDC-space position

ndc.x = ((oTexCoords.x * widthInv) - 0.5) * 2.0;
ndc.y = ((oTexCoords.y * heightInv) - 0.5) * 2.0;

Remember the NDC range is [-1,1], so what is oTexCoords?
You should have been using gl_FragCoord, as its x range is [0, screenwidth], which when multiplied by 1/screenwidth yields a range of [0,1].
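i.e. something along these lines, matching the original function (widthInv/heightInv being 1/width and 1/height of the viewport):

ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;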

Hi BionicBytes, that particular method still doesn’t work, but things work fine with the ol’ unprojection approach. I guess I’ll have to get my hands dirty and mess around with deferred shading some more before revisiting this particular aspect in the near future.

I’d be interested if you do get it to work using only gl_FragCoord and depth buffer texture.

Keep us posted?

Sure, no problem.