Hi. I’m trying to implement z-buffer shading, in which a fragment’s normal is derived by central-differencing the fragment’s corresponding texels in a depth texture to approximate the depth gradient. The general idea for computing the normal is:

```
normal.x = depth to the right - depth to the left
normal.y = depth above - depth below
normal.z = 1 / 2^(bits of precision)
normal = normalize(normal)
```

Essentially, I’m computing a normal map on the fly. If I were computing this offline on an unsigned byte image, where a unit change in depth is 1, normal.z would be 1. In a fragment program, though, the lookups return depth normalized to [0, 1], so I’m unsure what normal.z should be. My thought is that it should be the reciprocal of the number of unique values at the given precision: 1/2^8 for 8-bit depth, 1/2^16 for 16-bit, 1/2^24 for 24-bit, and so on. In other words, normal.z should correspond to a one-LSB change in depth.
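
Put as a GLSL sketch (only to illustrate the scaling; the helper function and the 24-bit constant are mine, not code from my pipeline):

```
// Sketch: depth lookups return values normalized to [0, 1], so one
// least-significant step of a B-bit depth buffer is 1/2^B. Using that
// step as normal.z makes the normal respond to single-LSB changes.
vec3 gradient_normal(float d_left, float d_right,
                     float d_below, float d_above)
{
    const float B = 24.0;            // assumed depth precision in bits
    float dzdx = d_right - d_left;   // central differences, multiples
    float dzdy = d_above - d_below;  // of 1/2^B
    return normalize(vec3(dzdx, dzdy, 1.0 / exp2(B)));
}
```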

But with a 24-bit depth texture, it seems that normal.z = 1/256 is what works; any smaller value washes out normal.xy. Here’s a run with normal.z = 1/256 and a 24-bit depth texture:

This result makes me doubt that I’m getting 24-bit precision out of the depth texture. Should I be getting 24-bit values when I do texture lookups?

Here’s my fragment shader that does the lookup and shading:

```
uniform sampler2D depth_map;
uniform float normal_z;  // z for the derived normal, set from the app
uniform vec2 e1;         // one-texel offset in s: vec2(1/texwidth, 0)
uniform vec2 e2;         // one-texel offset in t: vec2(0, 1/texheight)

const vec3 Kd = vec3(0.8, 0.8, 0.8);
const vec3 Ka = vec3(0.2, 0.2, 0.2);
const vec3 Ks = vec3(1.0, 1.0, 1.0);
const float shininess = 90.0;

// Blinn-Phong shading with the half vector H.
vec3 shading(vec3 src, vec3 N, vec3 L, vec3 V) {
    float n_dot_l = dot(N, L);
    vec3 H = normalize(L + V);
    vec3 specular = Ks * pow(max(dot(H, N), 0.0), shininess);
    vec3 diffuse = Kd * max(n_dot_l, 0.0);
    return (diffuse + Ka) * src + specular;
}

void main() {
    // Map the window position into the 512x512 depth texture.
    vec2 depth_coord = gl_FragCoord.xy / 512.0;
    vec4 src = vec4(1.0);
    vec3 light_vec = normalize(gl_LightSource[0].position.xyz);

    // Central differences of the stored depth approximate the gradient.
    vec3 normal;
    normal.x = texture2D(depth_map, depth_coord + e1).r -
               texture2D(depth_map, depth_coord - e1).r;
    normal.y = texture2D(depth_map, depth_coord + e2).r -
               texture2D(depth_map, depth_coord - e2).r;
    normal.z = normal_z;
    normal = normalize(normal);

    // Eye and light share a location, so V == L.
    src.rgb = shading(src.rgb, normal, light_vec, light_vec);
    gl_FragColor = vec4(src.rgb, 1.0);
}
```

The viewport is 512x512, and I index into the depth texture with the scaled window coordinate. The light source and eye are at the same location, and the model is white. For the central differencing, I find the fragment’s neighboring texels with e1 = vec2(1/texwidth, 0) and e2 = vec2(0, 1/texheight). I purposely use an orthographic projection and place the near and far clipping planes inside the model so that I get an ample number of unique depth values. The result should look faceted, since the depth gradient is constant across a polygon under an orthographic projection.
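
For completeness, the host-side setup of those uniforms looks roughly like this (a sketch of my surrounding code; the helper name and the use of texture unit 0 are assumptions):

```
// Hypothetical helper: bind the depth texture and feed the shader its
// uniforms. 'program' is the linked GLSL program; FBO_SIZE (512) is
// the depth texture's width and height.
void set_depth_shading_uniforms(GLuint program, GLuint tex_id)
{
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex_id);
    glUniform1i(glGetUniformLocation(program, "depth_map"), 0);
    glUniform1f(glGetUniformLocation(program, "normal_z"), 1.0f / 256.0f);
    glUniform2f(glGetUniformLocation(program, "e1"), 1.0f / FBO_SIZE, 0.0f);
    glUniform2f(glGetUniformLocation(program, "e2"), 0.0f, 1.0f / FBO_SIZE);
}
```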

My depth texture and FBO are set up like so:

```
glGenTextures(1, &tex_id);
glBindTexture(GL_TEXTURE_2D, tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, FBO_SIZE,
             FBO_SIZE, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
// No depth comparison: lookups should return the stored depth value.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_LUMINANCE);
```

```
int depth_bits;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_DEPTH_SIZE,
                         &depth_bits);
std::cout << "texture depth_bits: " << depth_bits << std::endl;

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, depth_fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, tex_id, 0);
glDrawBuffer(GL_NONE);  // depth-only pass: no color attachment
glReadBuffer(GL_NONE);

glGetIntegerv(GL_DEPTH_BITS, &depth_bits);
std::cout << "depth_bits: " << depth_bits << std::endl;

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
    CJ_ALERT("Bad status: %x", (int) status);
}
```

I’m getting 24 from both queries and no framebuffer error. But I don’t believe I’m getting that precision in the fragment shader. Does anyone see where I might be going wrong? How do high-precision depth textures and GLSL work together?
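
One sanity check I’m considering is reading the depth attachment back and counting distinct values; a sketch (the count_unique_depths helper is hypothetical, not code I’ve run):

```
#include <iostream>
#include <set>
#include <vector>

// Hypothetical check: read the FBO's depth attachment back and count
// the distinct values. An 8-bit path can produce at most 256 of them;
// a true 24-bit buffer spanning many depths should show far more.
void count_unique_depths(int width, int height)
{
    std::vector<float> depths(width * height);
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT,
                 &depths[0]);
    std::set<float> unique(depths.begin(), depths.end());
    std::cout << "unique depth values: " << unique.size() << std::endl;
}
```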

The Orange Book and other posts state that I need to use a sampler2DShadow for depth texture lookups. I’ve tried this and I get identical results.
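
For reference, the shadow-sampler variant I tried was along these lines (a reconstruction, not my exact code):

```
// Depth lookup through a shadow sampler, as the Orange Book suggests.
// With GL_TEXTURE_COMPARE_MODE at GL_NONE the lookup returns the
// stored depth rather than a compare result, though strictly speaking
// that combination is implementation-dependent.
uniform sampler2DShadow depth_map;

float fetch_depth(vec2 coord)
{
    // shadow2D takes a vec3; .z is the reference value, which is only
    // used when depth comparison is enabled.
    return shadow2D(depth_map, vec3(coord, 0.0)).r;
}
```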

Thanks for any help.

- Chris