Depth buffer to Z distance

Hey everybody,

I want to compare Z values in a shader so I can implement shadow mapping.

I first render a depth map from the light's viewpoint, and then I pass this depth map to each shader.

Then I calculate the vertex depth in the vertex shader and pass it as a TEXCOORD to my fragment shader.


  // vert depth
  OUT.FDepth  = mul(modelViewProj, IN.Position).z;

I end up with depth values per fragment.

Now I want to compare depth values from my depth buffer with the depth of the fragment (after generating the correct texture coordinates). So I need to convert my non-linear depth-buffer values. What is the right formula to do that?
I already checked the wiki, but it only explains the relation between z and w.
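Edit: for reference, the inversion I'm after — assuming a standard perspective projection and the default 0..1 depth range — seems to be zEye = n·f / (f − d·(f − n)), with n and f being the clip planes passed to the projection. A quick C check:

#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Convert a window-space depth value d in [0,1] (as read back from a
   depth buffer written by a standard perspective projection) to a
   positive eye-space distance. zNear/zFar are the projection's planes. */
static float depth_to_eye_z(float d, float zNear, float zFar)
{
    return (zNear * zFar) / (zFar - d * (zFar - zNear));
}

int main(void)
{
    /* Sanity checks: d=0 should give the near plane, d=1 the far plane. */
    printf("%f\n", depth_to_eye_z(0.0f, 1.0f, 100.0f)); /* 1.000000   */
    printf("%f\n", depth_to_eye_z(1.0f, 1.0f, 100.0f)); /* 100.000000 */
    return 0;
}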

I don't know if you have already, but take a look at this tutorial:

Thanks, I hadn't seen that one before.
That approach still doesn't work, though. I use CGC, so I don't have access to gl_TextureMatrix in shaders, for example; instead I pass the matrix as a uniform parameter and set GL_TEXTURE_MATRIX to identity. I'm going to keep working from your example, since it seems the simplest implementation available.

Now my vertex shader looks like this:

// App to vertex
struct a2v {
  float4 Position : POSITION;
  float3 normal   : NORMAL;
  float2 tc0      : TEXCOORD0;
};

// Vertex to pixel shader structure
struct v2p {
  float4 Position : POSITION;
  float2 tc0      : TEXCOORD0;
};

void main(
  // a2v
  in  a2v IN,
  out v2p OUT,
  out float4 vPos : TEXCOORD1,

  // Constants
  uniform float4x4 modelViewProj,
  uniform float4x4 texMat,
  uniform float4x4 modelView)
{
  // Shadow-map lookup coordinates (object space -> shadow-map space)
  vPos = mul(texMat, IN.Position);

  // Transform to clipspace
  OUT.Position = mul(modelViewProj, IN.Position);
  OUT.tc0 = IN.tc0;
}

struct v2p {
  float2 tc0 : TEXCOORD0;
};

void main(
  // In
  in v2p IN,
  in float4 vPos : TEXCOORD1,

  // Out
  out float4 outColor : COLOR,

  // Constants
  uniform sampler2D stex : TEXUNIT7)
{
  float4 shadowCoordinateWdivide = vPos / vPos.w;

  // Used to lower moiré pattern and self-shadowing
  vPos.z += 0.0005;

  float distanceFromLight = tex2D(stex, shadowCoordinateWdivide.st).z;

  float shadow = 1.0;
  if (vPos.w > 0.0)
    shadow = distanceFromLight < shadowCoordinateWdivide.z ? 0.5 : 1.0;

  outColor.rgb = shadow * 1.0f;
  outColor.a = 1.0f;
}

No result; everything is gray colored with no texture…

PS: My depth map is black to greyish without any whites. That shouldn't be right, should it?

Last time this happened to me was when I used an extension without the ARB at the end. But I don't think that's your case.

Anyway, I don't have experience with OpenGL 3.0, nor with the code that CGC generates from the shader. Maybe it would be easier to just code it by hand instead of converting.

Best of luck with it!

Given what code you’ve posted, it could be many things. Probably what you should do is whittle this down to a small complete GLUT test program and post it. You’ll probably find the error as you do this. If not, it’ll give us something concrete to go on.

Just looking at your code, it could be that your computation of the shadow transformation matrix (texMat) is wrong. With your shader code, this must take you from object space through light NDC space, plus the -1,1 -> 0,1 shift/scale, into shadow-map space.
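To illustrate that composition (a sketch with assumed names and a row-major layout, not your actual code): texMat would conceptually be bias * lightProj * lightView * model, where the bias matrix is the -1,1 -> 0,1 shift/scale. A quick check of just the bias stage:

#include <assert.h>
#include <math.h>
#include <stdio.h>

/* out = m * v, with m a row-major 4x4 and v a column vector. */
static void mat4_xform(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[r*4+0]*v[0] + m[r*4+1]*v[1] + m[r*4+2]*v[2] + m[r*4+3]*v[3];
}

/* The -1,1 -> 0,1 bias: scale x,y,z by 0.5, then translate by 0.5.
   Conceptually texMat = bias * lightProj * lightView * model, so that
   mul(texMat, objectSpacePosition) lands in shadow-map space. */
static const float bias[16] = {
    0.5f, 0.0f, 0.0f, 0.5f,
    0.0f, 0.5f, 0.0f, 0.5f,
    0.0f, 0.0f, 0.5f, 0.5f,
    0.0f, 0.0f, 0.0f, 1.0f,
};

int main(void)
{
    /* The light-NDC cube corners (-1,-1,-1) and (1,1,1) should map to
       (0,0,0) and (1,1,1) in shadow-map space. */
    float lo[4] = { -1.0f, -1.0f, -1.0f, 1.0f };
    float hi[4] = {  1.0f,  1.0f,  1.0f, 1.0f };
    float r[4];

    mat4_xform(bias, lo, r);
    printf("(%g, %g, %g)\n", r[0], r[1], r[2]);  /* (0, 0, 0) */

    mat4_xform(bias, hi, r);
    printf("(%g, %g, %g)\n", r[0], r[1], r[2]);  /* (1, 1, 1) */
    return 0;
}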

It could be that you’ve captured the light-space depth buffer wrong.

Also, first-cut shadow mapping should probably just use a GL_DEPTH_COMPONENT texture and let the hardware do the depth compare, but you’re using a sampler2D instead. Why? What texel format does this texture have? How did you get the light space depth buffer into this texture format? Have you read back this texture and validated that it isn’t all 0s or all 1s?

I notice you’re trying to pull the shadow map depth value out of the .z component. Why? If this is just a depth map, the depth value isn’t necessarily going to be stored in that component.

Have you checked for GL errors (glGetError)?

And I’d comment out the vPos.z += 0.0005 acne hack until you actually get shadow mapping working.
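One more sanity check on your "black to greyish" depth map: with a perspective projection, window-space depth is heavily skewed toward 1.0, so a correctly captured depth map usually looks mostly white unless geometry sits very close to the near plane. Quick numbers (assuming near=1 and far=100 — plug in your own planes):

#include <assert.h>
#include <math.h>
#include <stdio.h>

/* Forward mapping: eye-space distance t -> window-space depth d in [0,1]
   for a perspective projection with clip planes n and f:
   d = f*(t - n) / ((f - n) * t) */
static float eye_z_to_depth(float t, float n, float f)
{
    return f * (t - n) / ((f - n) * t);
}

int main(void)
{
    /* Even at 10% of the far distance, depth is already over 0.9. */
    printf("t= 2  -> d=%.3f\n", eye_z_to_depth( 2.0f, 1.0f, 100.0f)); /* 0.505 */
    printf("t=10  -> d=%.3f\n", eye_z_to_depth(10.0f, 1.0f, 100.0f)); /* 0.909 */
    printf("t=50  -> d=%.3f\n", eye_z_to_depth(50.0f, 1.0f, 100.0f)); /* 0.990 */
    return 0;
}

So a mostly dark depth map suggests the capture (or the readback into your texture) is wrong, not just the display.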