I'm accessing a depth texture in my shader (taken from the depth buffer). I would assume these depths range from 0.0 to 1.0, but a value of 0.5 is not halfway! So what depth is it really? Can this be changed? Or is there some way my shader can determine the max/min values of the depth buffer using built-in OpenGL variables?

but a value of 0.5 is not half way!

Because a perspective depth buffer is not linear. Try a projection that does not do a perspective divide, like an orthographic projection. Or make up your own wacky projection.
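A quick numeric sketch of just how non-linear it is, using the standard perspective depth mapping (the near/far values here are made up for illustration):

```python
# Standard perspective mapping from eye-space distance z (in [near, far])
# to window depth d (in [0, 1]): d = (far / (far - near)) * (1 - near / z).
def window_depth(z, near, far):
    return (far / (far - near)) * (1.0 - near / z)

near, far = 1.0, 100.0
mid = (near + far) / 2.0               # halfway between the planes
print(window_depth(mid, near, far))    # ~0.99, nowhere near 0.5

# Conversely, d = 0.5 lands very close to the near plane:
z_at_half = (far * near) / (far - 0.5 * (far - near))
print(z_at_half)                       # ~1.98
```

Most of the [0, 1] depth range is spent on geometry close to the near plane, which is why 0.5 is not halfway.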

You could also skip the depth buffer's values and make your own: compute scale * (lightpos - vertex) in a vertex program, then take that value in a fragment program, do a length calculation on it, and store the result in the alpha channel of a render texture, or just use it directly in your fragment program.

-SirKnight

SirKnight is right, as so often before. The depth buffer is not linear.

I also use depth values in some of my fragment programs, and have found that the following conversion formula tends to give the correct results:

BufferDepth = (F*N/LinearDepth+F)/(F-N)

F is the value of the far plane and N is the value of the near plane. These values can easily be passed to the fragment program. If you need to calculate the linear depth, you can rearrange the formula.
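A round-trip check of that formula in plain Python (near/far values chosen to match the shader later in the thread; note that LinearDepth here is the negative eye-space z, which is where the minus sign discussed further down comes from):

```python
N, F = 10.0, 2000.0
z_eye = -500.0                          # eye-space z is negative in OpenGL

# Forward: BufferDepth = (F*N/LinearDepth + F) / (F - N)
buffer_depth = (F * N / z_eye + F) / (F - N)
print(buffer_depth)                     # in (0, 1), and well above 0.5

# Rearranged: LinearDepth = (F*N) / (BufferDepth*(F-N) - F).
# It comes out negative again, so a leading minus turns it back
# into a positive distance from the eye.
z_back = (F * N) / (buffer_depth * (F - N) - F)
distance = -(N * F) / ((buffer_depth * (F - N)) - F)
print(z_back, distance)                 # ~ -500.0, ~500.0
```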

Rearranged:

where depth.z going in is the pixel's buffer depth, and coming out it should be a LinearDepth in the range 0 to 1(?):

```
depth.z = ((gl_DepthRange.far * gl_DepthRange.near)
/ (depth.z * gl_DepthRange.diff)) - gl_DepthRange.far;
```

doesn't seem to work :-/

rearranged:

depth.z = ((gl_DepthRange.far * gl_DepthRange.near) / (depth.z * gl_DepthRange.diff)) - gl_DepthRange.far;

I have it like this in my fragment program:

```
-(near*far)/((zSize*(far-near))-far);
```

It seems like you forgot the minus in front. The minus is there because it's the distance away from the near plane, or something like that ;-). Also make sure that your “gl_DepthRange.diff” is far - near.

It should work; it does for me, anyway.

depth.z = ((gl_DepthRange.far * gl_DepthRange.near) / (depth.z * gl_DepthRange.diff)) - gl_DepthRange.far

Your parentheses look wrong…

```
depth.z = (gl_DepthRange.far * gl_DepthRange.near) / ((depth.z * gl_DepthRange.diff) - gl_DepthRange.far)
```
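To see how much the grouping matters, here is a quick numeric comparison of the two versions (near/far values made up to match the shader later in the thread):

```python
near, far = 10.0, 2000.0
diff = far - near
d = 0.985                               # a typical non-linear buffer depth

# Earlier grouping: divide first, then subtract far afterwards.
wrong = ((far * near) / (d * diff)) - far
# Corrected grouping: subtract far inside the divisor.
right = (far * near) / ((d * diff) - far)

print(wrong, right)                     # wildly different results
```

The corrected grouping gives a value on the order of the eye-space z (negative, hence the minus discussed above); the other grouping is dominated by the stray "- far" and is useless as a depth.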

okay, I tried displaying the depth as the pixel colour, and all I get is black!

here's my shader (if I leave out the equation, I get depth values displayed):

```
uniform sampler2D blurTex;
uniform sampler2D depthTex;

void main()
{
    float focalDepth = 0.3;
    vec4 depth = texture2D(depthTex, gl_TexCoord[0].st);
    vec4 unBlur = texture2D(blurTex, gl_TexCoord[0].st);

    float d = -(gl_DepthRange.far * gl_DepthRange.near)
              / ((depth.z * gl_DepthRange.diff) - gl_DepthRange.far);

    unBlur.rgb = vec3(d, d, d);
    unBlur.a = 1.0;
    gl_FragColor = unBlur;
}
```

Get rid of the -ve at the start of the equation. It is not correct there, and you will be negating your depth values, which are then clamped to zero (hence “black”).

removing the -ve still results in black :-/

```
float depth;
float3 zSizetex = texRECT(depthScene, texCoord);
float2 near_far;
near_far.x = 10.0;   // near plane
near_far.y = 2000.0; // far plane
// linearize the buffer depth (taking the .x component of the texel); or: length(zSizetex)
float zSize = -(near_far.x * near_far.y) / ((zSizetex.x * (near_far.y - near_far.x)) - near_far.y);
if (zSize < c.y)
    depth = (zSize - c.y) / (c.y - c.x);
else {
    depth = (zSize - c.y) / (c.z - c.y);
    depth = clamp(depth, 0.0, c.w);
}
depth = depth * 0.5f + 0.5f;
color.a = depth;
```

This is my shader.

c.x = Near blur depth

c.y = Focal plane

c.z = Far blur depth

c.w = clamp value

The shader is used to calculate the blur depth and store it in the alpha channel (for depth of field). It is successful for me, in that it converts the nonlinear depth from the texture to a linear depth. And the minus is supposed to be there, when it is used in this way anyway.
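For reference, the same blur-depth mapping sketched in plain Python (the constants are invented for illustration; unlike the Cg above, the near branch is clamped here too, so extreme near values also stay in range):

```python
def blur_depth(z, cx, cy, cz, cw):
    # cx = near blur depth, cy = focal plane, cz = far blur depth, cw = clamp
    if z < cy:
        depth = max((z - cy) / (cy - cx), -cw)           # negative: near side
    else:
        depth = min(max((z - cy) / (cz - cy), 0.0), cw)  # positive: far side
    return depth * 0.5 + 0.5                             # remap to [0, 1] for alpha

cx, cy, cz, cw = 5.0, 20.0, 100.0, 1.0
print(blur_depth(20.0, cx, cy, cz, cw))    # 0.5 -> on the focal plane, sharp
print(blur_depth(5.0, cx, cy, cz, cw))     # 0.0 -> fully near-blurred
print(blur_depth(500.0, cx, cy, cz, cw))   # 1.0 -> clamped beyond the far blur depth
```

So alpha 0.5 means in focus, with values below/above 0.5 encoding blur on the near/far side of the focal plane.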

See ATI's “The Making of Ruby” for more information on the depth of field effect.

Could I get you to try it with gl_DepthRange? (This is a built-in OpenGL structure, btw.)

```
struct gl_DepthRangeParameters {
    float near; // n
    float far;  // f
    float diff; // f - n
};
uniform gl_DepthRangeParameters gl_DepthRange;
```

I believe that structure is only built into the OpenGL Shading Language. I am using NVIDIA's Cg for this application. Sorry I did not elaborate on this earlier.

Maybe you can pass the plane values yourself; then I guess it would be the same as what I am doing with Cg.

If you would like a copy of my OpenGL class and the shaders for the depth of field, you can write me at: peter_dahl (at) hotmail.com.