Ok sure, basically here is what I am doing. Assume I have an OpenGL context of size 2000 x 2000. I want to use that main context for several viewports. In one of those viewports I draw a scene and capture the depth information to a texture. Let's assume that viewport's lower-left corner is currently at (700, 300) and that the viewport is of size 550 x 200. Then I make the following calls…
I set up the texture using…
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 1024, 256, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, NULL);
Then I draw into the viewport and then I try to copy the depth information using…
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 700, 300, 550, 200);
So basically, the texture is 1024 x 256 in video memory, and I copy a 550 x 200 sub-image into it starting at screen coordinates (700, 300).
I wish I could attach an image but I can’t… but basically the texture I get looks something like the following…
xxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
ttyyyyyyyyyyyyyyyxxxx
tttttttttttttttttxxxx
Where the "x"s are uninitialized data (expected, since I did not copy anything into that area), the "y"s are correct depth information from my scene, and the "t"s are the garbage I am talking about: they are all 0.0 instead of the clear value, which is 1.0. Why are the "t"s there?