Urgent: Accessing Depth Texture FBO in GLSL

I have a depth texture with the format GL_DEPTH_COMPONENT24 and I was wondering how you blit it to the screen for debugging purposes. How would you go about accessing these depth formats in GLSL? Are the depth values stored in r/g/b/a? How exactly do I go about this?


Thanks a lot for the quick response. Will give it a go and get back to you.

Why is it that the depth texture only shows visible depth values when you're very close to an object? Any way of making it more visible?

Why is it that the depth texture only shows visible depth values when you're very close to an object?

Because that’s how depth works. Only the nearest objects have small values; most objects have values close to 1.0. That’s why directly visualizing a depth buffer is generally not useful unless you apply some transformation to the depth data.

Yeah, for a perspective projection, see this:

Do what they’re doing, except that the last line of their LinearizeDepth(vec2 uv) function is wrong. Instead, you want:

return (n * z) / ( f - z * (f - n) );

You’ll notice that, unlike their function, the above for z = 0 (near clip value) results in 0 (black) and z = 1 (far clip value) results in 1 (white). You can easily derive this from the perspective definition of eye-space z (z_eye) as a function of the depth buffer value (z):

eye.z = n * f / ((z * (f - n)) - f);

To map -n…-f to 0…1 (black…white), just negate, subtract n, and divide by (f-n), simplify, and you’re there.

If you're not doing so already, you may also need to temporarily change the texture compare mode:

// draws a textured rectangle with the shadow depth texture
void CShadow::DrawDebug() const
{
    glBindTexture(GL_TEXTURE_2D, tex);

    // disable depth comparison so it samples like a regular texture
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);

    // ... draw quad ...

    // reset compare mode for shadow sampling again
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);

    glBindTexture(GL_TEXTURE_2D, 0);
}

And in the shader, sample from it like a regular texture.

Buffers are rendering correctly, here’s a pic.

Normals, Specular Term, Diffuse Term, Depth

It’s a partial deferred renderer that uses pre-pass lighting. I also had a go with best-fit normals, but it seems to have an effect on performance when it has to scale normals in the fragment shader, making it not a viable choice. I’m busy with the material system and I’m having a problem with the piece of code below.

Vertex shader:

	#version 120

	uniform mat4 u_proj;
	uniform mat4 u_view;
	uniform mat4 u_world;

	uniform mat4 u_worldinvtrans;
	uniform mat4 u_viewinv;
	uniform mat4 u_vpinverse;

	attribute vec3 a_pos;
	attribute vec2 a_uv;

	varying vec2 v_uv;
	varying vec4 v_pos;

	void main() {
		vec4 worldPos = u_world * vec4(a_pos, 1.0);
		vec4 viewPos = u_view * worldPos;

		gl_Position = u_proj * viewPos;
		v_uv = a_uv;
		v_pos = gl_Position;
	}

Fragment shader:

	#version 120

	uniform sampler2D u_samp;
	uniform sampler2D u_light;

	varying vec2 v_uv;
	varying vec4 v_pos;

	void main() {
		vec4 c = texture2D(u_samp, v_uv);
		vec4 l = texture2D(u_light,(v_pos.xy+1)*0.5f);
		gl_FragData[0] = vec4(c.rgb * l.rgb + l.a, 1.0);
	}


The shader is used when rendering a mesh on a second pass and adding the lighting from the light buffer.

The problem lies with this piece of code:
vec4 l = texture2D(u_light,(v_pos.xy+1)*0.5f);
The texture coordinates it produces are wrong. The light buffer is bound correctly and the uniform u_light is set correctly, so that’s not the problem.

The problem is sorted out with this line of code:
vec4 l = texture2D(u_light,(v_pos.xy/v_pos.w+1)*0.5f);

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.