# Reading the depth of a pixel into an image

Hi,

Is it possible to read the depth of a pixel and encode it in a color? Basically, I want to get the distance of a pixel from my camera for use in my app. I think it should work like this:

1. pass the near and far clip planes to the shader
2. get the depth with gl_FragCoord.z and encode it in a color value
3. render the scene
4. read the color from the pixel whose distance I want
5. decode the distance

I just can’t get my head around the math involved here.
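The encode/decode math in steps 2 and 5 can be sketched outside GLSL. A minimal Python sketch, assuming a simple linear mapping against the near and far planes (the function names are illustrative, not from any API):

```python
def encode_distance(distance, near, far):
    """Map an eye-space distance in [near, far] to a grey value in [0, 1]."""
    return (distance - near) / (far - near)

def decode_distance(color, near, far):
    """Invert encode_distance: recover the distance from the stored grey value."""
    return near + color * (far - near)

near, far = 1.0, 300.0
c = encode_distance(150.0, near, far)       # grey value written by the shader
d = decode_distance(c, near, far)           # distance recovered after read-back
print(c, d)
```

The shader side would write the encoded value into all three colour channels, and the app side would apply the decode after reading the pixel back.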

```
uniform float fNear;
uniform float fFar;

void main()
{
    float fDepth = gl_FragCoord.z / gl_FragCoord.w;
    float fColor = 1.0 - smoothstep(fNear, fFar, fDepth);
    gl_FragColor = vec4(fColor, fColor, fColor, 1.0);
}
```

But the shader only draws white.

Thank you!

You could use the built-in uniform gl_DepthRange to get far, near and diff (far - near) if you want, but that is not the problem here.

float fDepth = gl_FragCoord.z / gl_FragCoord.w;

It mostly returns values between 0.9 and 1.0. I think that is because z is non-linear.
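That non-linearity is easy to check numerically. For a gluPerspective-style projection, NDC depth for an eye-space distance d is z_ndc = (f + n)/(f - n) - 2fn/((f - n)·d), and window depth is 0.5·z_ndc + 0.5. A plain-math Python sketch (no GL involved, names are illustrative):

```python
def window_depth(d, n, f):
    """Window-space depth for eye distance d with a gluPerspective-style
    projection (near plane n, far plane f): what ends up in gl_FragCoord.z."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return 0.5 * z_ndc + 0.5

n, f = 1.0, 100.0
# depth is heavily skewed toward 1.0: halfway into the frustum
# the stored value is already around 0.99, not 0.5
print(window_depth(n, n, f))              # 0.0 at the near plane
print(window_depth(f, n, f))              # 1.0 at the far plane
print(window_depth((n + f) / 2.0, n, f))  # ~0.99
```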

float fColor = 1.0 - smoothstep(fNear, fFar, fDepth);

Presumably you are using gluPerspective with a value for fNear of 1.0 or greater, so smoothstep always returns 0.

So fColor is 1.0

gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);

And that is white.

So you could eliminate the uniform variables and change that line to:

smoothstep(gl_DepthRange.near, gl_DepthRange.far, fDepth);

As far as I know, in the FS gl_DepthRange.near is always 0.0 and gl_DepthRange.far is always 1.0, so you will achieve what you want, but the difference in colour will be small.

If you want to linearise the depth as you defined it in your program, you should do:

VS

```
varying float depth;

// eye-space z of the vertex
depth = (gl_ModelViewMatrix * gl_Vertex).z;
```

FS

```
varying float depth;

// eye space looks down the -z axis, so negate to get a positive
// distance; multiplying by gl_FragCoord.w (1/clip.w) would just
// cancel the interpolated value out to a constant
float fDepth = -depth;
```

I think it should work properly.
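Alternatively, the non-linear value can be linearised after read-back. For a gluPerspective-style projection the inverse of the depth mapping is d = 2fn / (f + n - z_ndc·(f - n)), with z_ndc = 2·z_win - 1. A Python sketch with illustrative names, verifying the round trip against the forward mapping:

```python
def window_depth(d, n, f):
    """Forward mapping: eye distance -> window depth, for checking."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return 0.5 * z_ndc + 0.5

def linearize_depth(z_win, n, f):
    """Recover the eye-space distance from a window depth in [0, 1]
    (gluPerspective-style projection, default depth range)."""
    z_ndc = 2.0 * z_win - 1.0
    return (2.0 * f * n) / (f + n - z_ndc * (f - n))

n, f = 1.0, 100.0
z = window_depth(50.0, n, f)      # non-linear value as stored
print(linearize_depth(z, n, f))   # recovers the original 50.0
```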

I have a question too…

If I want to modify the distance in the FS and I have the following lines:

fDepth -> linear depth computed using uniform values taken from gluPerspective (from 0 to 300)
fOperation -> small value calculated in the FS, always the same for a given fragment (from 1 to 10)
gl_FragCoord.w -> 1/300

If I use this, there is no problem:
gl_FragDepth = fDepth * gl_FragCoord.w;

But the following one worries me:
gl_FragDepth = (fDepth - fOperation) * gl_FragCoord.w;

I have two problems:

1- Occlusion against geometry that is not processed by the shaders fails.
2- Occlusion between objects that use the shaders changes with the distance.
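A possible reason for problem 1, sketched numerically under the assumption that the written value really is fDepth/300: that value is linear in the eye distance, while geometry drawn without the shaders writes the usual non-linear window depth, so the two kinds of depth values only agree at the clip planes (window_depth and shader_depth are illustrative names):

```python
def window_depth(d, n, f):
    """Non-linear window depth written by the fixed-function pipeline."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * d)
    return 0.5 * z_ndc + 0.5

def shader_depth(d, far=300.0):
    """Linear depth the shader writes: fDepth * (1/300)."""
    return d / far

n, f = 1.0, 300.0
for d in (10.0, 100.0, 200.0):
    # the two values disagree badly between the planes, so depth tests
    # against fixed-function geometry give the wrong occlusion
    print(d, window_depth(d, n, f), shader_depth(d))
```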

Thanks

fusion44, if you did not change the depth range (see this) you get the fragment depth that is written in the depth buffer, this way:

```
void main()
{
    // gl_FragCoord.z is the window-space depth written to the depth
    // buffer; with the default depth range it is already in [0, 1]
    float fDepth = gl_FragCoord.z;
    gl_FragColor = vec4(fDepth, fDepth, fDepth, 1.0);
}
```

By default, depth values are mapped to the [0, 1] range by the viewport transform; the z value after perspective division (NDC z) is in the [-1, 1] range, and gl_FragCoord.z is that value remapped as 0.5 * z_ndc + 0.5.
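That viewport mapping, including non-default glDepthRange values, is simple enough to check numerically (ndc_to_window is an illustrative name):

```python
def ndc_to_window(z_ndc, near=0.0, far=1.0):
    """Map NDC depth in [-1, 1] to window depth using the
    glDepthRange near/far values (defaults 0 and 1)."""
    return 0.5 * (far - near) * z_ndc + 0.5 * (far + near)

print(ndc_to_window(-1.0))  # 0.0 at the near plane
print(ndc_to_window(1.0))   # 1.0 at the far plane
print(ndc_to_window(0.0))   # 0.5 in the middle of NDC
```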