Displaying Depth Buffer

I am having difficulty displaying the depth texture created from the depth buffer of a pbuffer. I can get an RGBA texture created from the pbuffer to display on a quad no problem. The ARB_depth_texture spec states that a depth texture is treated as a luminance texture when the texture unit expects RGBA, but all the format conversion stuff has turned my brain to spaghetti. The best I can get is a kind of double-vision, zoomed version of the depth buffer - and that's with what seem to be incorrect formats.

Tons of thanks in advance for any insight/advice/answers… :slight_smile:

First, I create a pbuffer with 24-bit depth.

Then I activate it and render a scene to it.

Then I read the depth buffer with:
glReadPixels(0, 0, TEX_SIZE,TEX_SIZE,GL_DEPTH_COMPONENT,GL_UNSIGNED_INT, texdata);
(Should this be GL_FLOAT instead of GL_UNSIGNED_INT - aren’t the depth values clamped to (0,1.0]?)
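(Sketch of the GL_FLOAT variant I have in mind - texdata_f here is a hypothetical GLfloat array of TEX_SIZE * TEX_SIZE:)

// Depth comes back as floats already clamped to [0, 1]
static GLfloat texdata_f[TEX_SIZE * TEX_SIZE];
glReadPixels(0, 0, TEX_SIZE, TEX_SIZE, GL_DEPTH_COMPONENT, GL_FLOAT, texdata_f);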

Then I create the depth texture with:
glBindTexture(GL_TEXTURE_2D,depth_map);
glTexImage2D(GL_TEXTURE_2D,0, GL_DEPTH_COMPONENT24_ARB,TEX_SIZE,TEX_SIZE,0,GL_DEPTH_COMPONENT,GL_UNSIGNED_SHORT,texdata);
(Again, this is the only way it partially works. If I use GL_UNSIGNED_INT to match the glReadPixels rather than a short it just displays a white quad.)

Finally, I draw a textured quad with:
glBindTexture(GL_TEXTURE_2D,depth_map);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB,GL_LUMINANCE);
//Draw texture quad here…
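(Trimmed for brevity - the draw itself is just an immediate-mode quad over the whole texture, something like this sketch:)

glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glDisable(GL_TEXTURE_2D);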

If my pbuffer has a 24-bit depth buffer, what format and type should I pass to glReadPixels? Likewise, what should they be for glTexImage2D?

I am planning on using glTexSubImage2D and implementing shadow mapping later, but first I wanted to make sure I was generating the correct depth texture.

Thanks again for any help

Well I finally got it to work (albeit very slowly) so I thought I would post in case someone is interested.

In the long run, glCopyTexSubImage is the way to go (actually, eventually I should use render-to-texture, on Windows at least), but I was having problems displaying it, since it copies the 24-bit depth values into the 32-bit int depth texture.

Now the spec says the depth texture is treated as a luminance texture, which I am guessing has 16 bits per texel. Thus it may only use the high bits of the texture.

In any case, it all shows up solid white - thus my problem. There may be a better workaround… please let me know if there is.
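
For reference, the copy path I mean is just this (a sketch - depth_map is assumed to already be a GL_DEPTH_COMPONENT24_ARB texture the same size as the pbuffer, with the pbuffer as the current read context):

// Copy the pbuffer's depth buffer straight into the depth texture,
// no round trip through client memory
glBindTexture(GL_TEXTURE_2D, depth_map);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, TEX_SIZE, TEX_SIZE);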

Just to verify I had the right depth, I coded a very, very slow workaround (a self-contained sketch of these steps follows the list):

  1. Read the depth buffer into a uint array:
     glReadPixels(0, 0, TSIZE, TSIZE, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depthdata);

  2. Copy the uint array to a short array:
     for (y = 0; y < TSIZE; y++)
         for (x = 0; x < TSIZE; x++)
             texdata[y * TSIZE + x] = (GLushort)depthdata[y * TSIZE + x];

  3. Copy to the texture:
     glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TSIZE, TSIZE, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, texdata);

  4. Render to a quad as before.
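
For completeness, here is a self-contained version of those steps as one sketch. TSIZE and depth_map are assumed to be set up already, and the >> 16 is my own addition so the short keeps the most significant bits of each 32-bit depth value instead of being truncated:

// Slow path: read back 32-bit depth values, narrow them to 16-bit shorts,
// then re-upload them as a depth texture just for viewing.
static GLuint depthdata[TSIZE * TSIZE];
static GLushort texdata[TSIZE * TSIZE];
int i;

glReadPixels(0, 0, TSIZE, TSIZE, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, depthdata);

for (i = 0; i < TSIZE * TSIZE; i++)
    texdata[i] = (GLushort)(depthdata[i] >> 16);   // keep the high 16 bits

glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TSIZE, TSIZE, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, texdata);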

This brings the ~75 fps I get with glTexSubImage2D down to about 2 fps, but at least I can finally see the pbuffer's depth buffer :slight_smile:

Now I can comment out all that stuff, use glCopyTexSubImage and move on to the actual shadow mapping…

The source type you give glTexImage really should be the same as the one you give glReadPixels (i.e. GL_UNSIGNED_INT in this case).

Make sure you use some of your depth range. If your geometry is concentrated at either the near or far clipping plane, it'd be no wonder if you don't see anything interesting.
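
Something like this (a sketch, assuming texdata is a GLuint array of TEX_SIZE * TEX_SIZE and depth_map is your depth texture):

// Keep the type consistent on both sides of the round trip
glReadPixels(0, 0, TEX_SIZE, TEX_SIZE, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, texdata);
glBindTexture(GL_TEXTURE_2D, depth_map);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TEX_SIZE, TEX_SIZE, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, texdata);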

The following code snippet is from the nVidia demo in their SDK (Simple Render to depth texture). Thought it might help you.

// A depth texture can be treated as a luminance texture
glBindTexture(GL_TEXTURE_2D, light_depth);
// Disable the shadow hardware
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_SGIX, GL_FALSE);

if (wglBindTexImageARB(pbuffer.hpbuffer, WGL_DEPTH_COMPONENT_NV) == FALSE)
    wglGetLastError();

glBegin(GL_QUADS);
glTexCoord2f(0, 0); glVertex2f(-1, -1);
glTexCoord2f(0, 1); glVertex2f(-1,  1);
glTexCoord2f(1, 1); glVertex2f( 1,  1);
glTexCoord2f(1, 0); glVertex2f( 1, -1);
glEnd();

glEnable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);

// Enable the shadow mapping hardware
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_SGIX, GL_TRUE);

if (wglReleaseTexImageARB(pbuffer.hpbuffer, WGL_DEPTH_COMPONENT_NV) == FALSE)
    wglGetLastError();

Perhaps the geometry really was concentrated at the far clipping plane. I will double check. Also, it makes a lot more sense for the type passed to glTexImage to match the glReadPixels type - I don't know why it was only working with the conversion to a short.

Thanks again for your input, and thanks for the nVidia code, rgpc.

Do you really need to do that glReadPixels stuff? Why not glCopyTexImage2D?

Zengar - the glReadPixels stuff was only for testing. You are absolutely correct - use glCopyTexSubImage2D or, even better, render-to-texture (if you are on Windows).