glReadPixels( GL_DEPTH_COMPONENT )

I can’t figure out what’s going wrong.

If I read back the depth component after a scene render, I get all 255 (1.0).

If I read back the color buffer, everything seems fine.

I enable depth writing (glDepthMask(true)), so it should be writing something to the depth buffer.
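Roughly what I’m doing, from memory (the 512x512 size and byte readback type here are just for illustration):

glDepthMask(GL_TRUE);                /* make sure depth writes are on */
/* ... render the scene ... */
GLubyte depth[512 * 512];            /* 512x512 is just an example size */
glReadPixels(0, 0, 512, 512,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, depth);
/* every value in depth[] comes back as 255 */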

Can anyone shed some light on what might be going on?

Setting the depth mask to true is not enough to enable the depth buffer. You need to call glEnable(GL_DEPTH_TEST) to enable depth testing as well. Also, the depth mask is true by default, so you don’t really need to set it unless you set it to false at some point and need to re-enable it.

It is set to false elsewhere… so I force it on for that purpose.

I am enabling depth testing, but I didn’t mention that because I thought it wasn’t necessary.

I thought writes to the depth buffer still happen even if you have depth testing disabled. From my understanding, disabling depth testing just stops the fragment’s z position from being compared against the value in the depth buffer.

I figured out that the near clip plane has something to do with it.

I’m in hell.

If the depth test is disabled, then everything related to the depth buffer is disabled. That means no depth test (as expected), but also no depth writes. If you want to unconditionally write to the depth buffer without any testing, keep the depth test enabled but set the depth compare function to GL_ALWAYS.
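In other words, something like this:

glEnable(GL_DEPTH_TEST);   /* without this, no depth writes happen at all */
glDepthFunc(GL_ALWAYS);    /* every fragment passes, so depth is written unconditionally */
glDepthMask(GL_TRUE);      /* allow writes to the depth buffer */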

As for the values you get, are you actually drawing something at a depth that would generate something other than 1.0? If you draw your object on the far plane, then everything will get a depth of 1.0.

Also, you said the depth value is 255; does that mean you’re reading the values back as 8-bit unsigned integers? If so, you have very low precision in the data being read back. Remember that the distribution of depth bits is not linear when using a perspective projection matrix. With a near clip plane that is too close, this could easily mean that most of the viewing volume maps to 255 because of the distribution of the depth bits. The larger the ratio between the far and near clip plane distances, the more bits are packed close to the near clip plane. As an example, even at a near:far ratio of 1000, the value 255 will represent over 60% of the entire view volume.
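To put numbers on that, here’s a quick sketch, assuming the standard perspective depth mapping d = f(z - n) / (z(f - n)) in window space:

#include <stdio.h>

/* Window-space depth in [0,1] for eye-space distance z,
   given near plane n and far plane f (standard perspective projection). */
static double window_depth(double z, double n, double f)
{
    return f * (z - n) / (z * (f - n));
}

int main(void)
{
    const double n = 1.0, f = 1000.0;       /* near:far ratio of 1000 */
    const double threshold = 254.5 / 255.0; /* where an 8-bit readback rounds to 255 */
    double z = n;
    while (z < f && window_depth(z, n, f) < threshold)
        z += 0.01;
    printf("8-bit depth hits 255 at z = %.1f, i.e. the last %.0f%% of the volume\n",
           z, 100.0 * (f - z) / (f - n));
    return 0;
}

This prints a crossover around z = 338, so roughly the last two thirds of the view volume reads back as 255.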

Excellent, that’s exactly what I’m finding now.

I want my near clip plane really close (.001). Am I doomed to the scenario you mentioned? Should I bring my far clip plane in?

The pixels I’m reading back with glReadPixels aren’t exactly my concern.

I’m trying to create a shadow map using glTexImage2D(blah,blah,GL_DEPTH_COMPONENT,blah…)

so OpenGL is responsible for the precision at which the pixels are read back from the depth buffer into the texture.

Is there some way to increase the precision? (glPixelTransfer or something?)

If you have the near plane at 0.001, then pretty much everything further away than 0.5 (assuming a far plane larger than 1 or so) will map to 1.0 at 8-bit quantization. With the standard fixed-function pipe there’s not much, if anything, you can do about that, apart from increasing the near clip plane of course.

Not that I have used shadow maps, but aren’t there ways to copy the depth buffer directly into a depth texture, using more than 8 bits of precision? Like 16- or 24-bit depth texture formats?

I’m going off of this:
http://www.ampoff.org/modules.php?name=Forums&file=viewtopic&t=15&postdays=0&postorder=asc&start=0

From what I gather, you create a texture with GL_DEPTH_COMPONENT as its format, which flags OpenGL to copy from the depth buffer into the texture upon a glCopyTexSubImage call. I think it then treats the texture as if the format were GL_LUMINANCE.
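If I’m reading that thread right, the setup is something like this (sizes guessed, untested):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* a depth internal format tells OpenGL this texture holds depth values */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* ... render the scene from the light's point of view ... */

/* because the texture has a depth internal format, this copies
   straight from the depth buffer rather than the color buffer */
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, 512, 512);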

The only other route I can think of is to write the depth values out as the fragment color with a fragment shader in a pbuffer context, and then bind the pbuffer as a texture for my shadow map.
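Something along these lines, I imagine (untested; note that into a plain RGBA8 pbuffer this still only gives 8 bits, so I’d need a float pbuffer, or to pack the depth across the color channels, to actually gain precision):

/* GLSL fragment shader that writes window-space depth out as color */
static const char *depth_to_color_fs =
    "void main()\n"
    "{\n"
    "    gl_FragColor = vec4(gl_FragCoord.z);\n"
    "}\n";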

I’m not sure how much overhead is involved in that, though… performance is absolutely critical.

I guess I could give it a go and find out.

Looking at the depth texture extension specification, I see that you can request specific bit depths for the internal format. Try a sized format like GL_DEPTH_COMPONENT16 or GL_DEPTH_COMPONENT24 (depending on what depth buffer resolution you have). Well, this is heading towards an area I’m not really familiar with.
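I’d guess that means changing the allocation to something like this (untested, assuming a 24-bit depth buffer):

/* sized internal format from ARB_depth_texture: asks for the full
   24 bits instead of letting the driver pick a resolution */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);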

Cheers man, thanks!