I am having difficulty displaying a depth texture created from the depth buffer of a pbuffer. I can get an RGBA texture created from the pbuffer to display on a quad with no problem. The ARB_depth_texture spec states that a depth texture is treated as a luminance texture when the texture unit expects RGBA. But all the format-conversion details have turned my brain to spaghetti. The best I can get is a kind of double-vision, zoomed version of the depth buffer, and even that only with what seem to be incorrect formats.
Tons of thanks in advance for any insight/advice/answers…
First, I create a pbuffer with 24-bit depth.
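For reference, a trimmed sketch of the pbuffer setup (this is the WGL path; hdc is the device context, and I've cut the attribute list down to the relevant bits):

int attribs[] = {
    WGL_DRAW_TO_PBUFFER_ARB, GL_TRUE,
    WGL_SUPPORT_OPENGL_ARB,  GL_TRUE,
    WGL_COLOR_BITS_ARB,      32,
    WGL_DEPTH_BITS_ARB,      24,   // the 24-bit depth buffer
    0
};
int format;
UINT numFormats;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &format, &numFormats);
HPBUFFERARB pbuffer = wglCreatePbufferARB(hdc, format, TEX_SIZE, TEX_SIZE, NULL);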
Then I activate it and render a scene to it.
Then I read the depth buffer with:
glReadPixels(0, 0, TEX_SIZE, TEX_SIZE, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, texdata);
(Should this be GL_FLOAT instead of GL_UNSIGNED_INT? Aren't the depth values clamped to [0, 1]?)
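If GL_FLOAT is the right choice, I assume the readback would instead look like this, with the depth values left in [0, 1] rather than scaled to the integer range:

GLfloat texdata[TEX_SIZE * TEX_SIZE];   // one depth value per pixel
glReadPixels(0, 0, TEX_SIZE, TEX_SIZE, GL_DEPTH_COMPONENT, GL_FLOAT, texdata);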
Then I create the depth texture with:
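Roughly the following (GL_DEPTH_COMPONENT24_ARB is my guess at the right internal format; depthTex is my texture object):

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TEX_SIZE, TEX_SIZE, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, texdata);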
(Again, this is the only way it partially works. If I use GL_UNSIGNED_INT to match the glReadPixels call rather than GL_UNSIGNED_SHORT, it just displays a white quad.)
Finally, I draw a textured quad with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB, GL_NONE);
//Draw texture quad here…
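Spelled out, the drawing amounts to this (immediate mode; the vertex coordinates are just placeholders for a full-screen quad):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, depthTex);
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();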
If my pbuffer has a 24-bit depth buffer, what should the format and type arguments to glReadPixels be? Likewise, what should they be for glTexImage2D?
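For concreteness, the combination I would have expected to be self-consistent is this, with the readback type matching the upload type:

// Read the 24-bit depth buffer; GL scales depth from [0, 1] up to
// the full range of the requested integer type:
GLuint texdata[TEX_SIZE * TEX_SIZE];
glReadPixels(0, 0, TEX_SIZE, TEX_SIZE, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, texdata);

// Upload with a matching type and a 24-bit internal format:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24_ARB, TEX_SIZE, TEX_SIZE, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, texdata);

But as noted above, the matching GL_UNSIGNED_INT upload is exactly the case that gives me a plain white quad.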
I am planning on using glTexSubImage2D and implementing shadow mapping later, but first I wanted to make sure I was generating the correct depth texture.
Thanks again for any help