Depth peeling problem

Hi all,

I posted this morning about a depth buffer problem, and it now seems to be resolved using an FBO.

Now I want to perform some very simple depth peeling. I am just drawing a cube, and I would like to extract the second slice (second depth layer).

The code to do this is the following:

// First draw of the scene into the framebuffer object
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

glBindTexture(GL_TEXTURE_2D, screenTexture[0]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, screenTexture[0], 0);

GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
	exit(1);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45, (float)m_iWidth / (float)m_iHeight, 1.0f, 100);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity ();

glTranslatef(0.0f, 0.0f, -z);       // translate into/out of the screen by z

glRotatef(xrot, 1.0f, 0.0f, 0.0f);  // rotate on the X axis by xrot
glRotatef(yrot, 0.0f, 1.0f, 0.0f);  // rotate on the Y axis by yrot

glEnable(GL_DEPTH_TEST);
drawObjects(1, -1);
drawObjects(-1.5, -3.5);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Second pass: peel away the first layer to get the second depth
glEnable(GL_TEXTURE_2D);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glBindTexture(GL_TEXTURE_2D, sliceTextures[1]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 512, 512, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, sliceTextures[1], 0);

glBindTexture(GL_TEXTURE_2D, screenTexture[1]);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 512, 512, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, screenTexture[1], 0);

status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status != GL_FRAMEBUFFER_COMPLETE_EXT)
	exit(1);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glBindTexture(GL_TEXTURE_2D, screenTexture[0]); // first pass's depth, read by the shader

glUseProgram(program);
glColor4f(1, 1, 1, 1);
drawObjects(1, -1);
drawObjects(-1.5, -3.5);
glUseProgram(0);

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

// Prepare the matrices to draw a simple texture
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, m_iWidth, 0, m_iHeight);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);

glColor4f(1, 1, 1, 0.5f);
// Draw the texture
glBindTexture(GL_TEXTURE_2D, sliceTextures[1]);
glBegin(GL_QUADS);
glTexCoord2f(0, 0);
glVertex2f(0, 0);
glTexCoord2f(0, 1);
glVertex2f(0, m_iHeight);
glTexCoord2f(1, 1);
glVertex2f(m_iWidth, m_iHeight);
glTexCoord2f(1, 0);
glVertex2f(m_iWidth, 0);
glEnd();

glDisable(GL_TEXTURE_2D);
glDisable(GL_BLEND);

glutSwapBuffers();

As you can see in the code, I use a shader to test each fragment's depth against the first pass, comparing it with the previous depth buffer stored in a texture.

The vertex shader is:

varying vec4 coord;

void main()
{
    gl_Position = ftransform();
    coord = gl_Position / gl_Position.w;   // perspective division: NDC in [-1, 1]
    coord = (coord + 1.0) / 2.0;           // remap to [0, 1]
    gl_FrontColor = gl_Color;
}

Thus, coord is now between 0 and 1, and the fragment shader is:

uniform sampler2D ShadowMap;

varying vec4 coord;

void main()
{
    float depth = texture2D(ShadowMap, coord.xy).x;  // depth of the first layer
    if (coord.z <= depth)
        discard;                                     // peel away the first layer
    gl_FragColor = gl_Color;
}
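
(The ShadowMap sampler is never set explicitly in my code, so it defaults to texture unit 0, where screenTexture[0] is bound. A minimal sketch of setting it explicitly, assuming unit 0, would be:

// bind the first pass's depth texture to unit 0 and point the sampler at it
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, screenTexture[0]);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "ShadowMap"), 0);
)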

The result, when I zoom in, is the following:

Can you explain this problem to me?

Thank you very much.

Have you tried adding a small bias when doing the depth comparison? I don’t see one in your fragment shader code. In my opinion this is useful to avoid z-fighting, for depth peeling as well as for shadow mapping.

What is it?

Thanks

You can do something like this in your fragment shader:


#define bias 0.001

uniform sampler2D ShadowMap;
varying vec4 coord;

void main()
{
    float depth = texture2D(ShadowMap, coord.xy).x;
    if (coord.z <= (depth + bias))
        discard;
    gl_FragColor = gl_Color;
}

This way, you push the depth values read from the depth texture back a little and avoid z-fighting.

It looks like the problem is now corrected. It was in the computation of the screen coordinates in the vertex shader, using:

gl_Position = ftransform();
coord = gl_Position / gl_Position.w;
coord = (coord + 1.0) / 2.0;

Instead, I have to use, in the fragment shader:

vec4 coord = gl_FragCoord;
coord.x = coord.x / 512.0;   // 512 = viewport/texture size
coord.y = coord.y / 512.0;

And then everything is ok.
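
For reference, here is the complete fragment shader I end up with (a minimal sketch, assuming a 512x512 viewport/texture and keeping the bias suggested above):

uniform sampler2D ShadowMap;   // depth of the first layer, from the first pass

const float bias = 0.001;      // small offset to avoid z-fighting

void main()
{
    // gl_FragCoord is in window (pixel) coordinates; dividing by the
    // viewport size (512) gives [0, 1] texture coordinates
    vec2 uv = gl_FragCoord.xy / 512.0;
    float depth = texture2D(ShadowMap, uv).x;
    if (gl_FragCoord.z <= depth + bias)
        discard;   // peel away everything at or in front of the first layer
    gl_FragColor = gl_Color;
}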

Any idea why my coordinate computation was incorrect?

Thanks

FYI, I have not been able to use #define in my fragment shader. I am using GLSL.

FYI, I have not been able to use #define in my fragment shader. I am using GLSL.

That's odd; for my part, I have used it on a GeForce 8800 GS. Maybe it is not supported by your hardware, as this may be an SM3-or-above feature.
Anyway, you can just declare a global float variable instead.
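
Something like this (the same constant, without the preprocessor):

const float bias = 0.001;   // plain GLSL constant, no #define needed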

Any idea why my coordinate computation was incorrect?

I don’t understand why. In my opinion there is a problem in both code samples… maybe just not yet visible in the second one.

Here:



gl_Position = ftransform();
coord = gl_Position / gl_Position.w;
coord = (coord + 1.0) / 2.0;

You should just do:
coord.z = coord.z * 0.5 + 0.5;

Applying this offset doesn’t make sense for the x and y components; you do it only because the z values encoded in your depth texture are between 0 and 1, whereas the ones you compute after projection are between -1 and 1.
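
In other words, something like this in the vertex shader:

gl_Position = ftransform();
coord = gl_Position / gl_Position.w;   // NDC: all components in [-1, 1]
coord.z = coord.z * 0.5 + 0.5;         // only z remapped to [0, 1], to match the depth texture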

For the second sample, I don’t see the relation between your screen size (is that what it is?) and texture coordinates. I wonder how you ended up doing that. :)

First of all, I thought that after applying ftransform in the vertex shader, the position would be a vector (x, y, z, w) with x, y, z values between -w and w; but according to you, that is not the case?

Then, gl_FragCoord seems to correspond to the screen coordinates. In my case, the viewport is 512x512, which is why I divide the x and y coordinates by 512. Is that not logical?

First of all, I thought that after applying ftransform in the vertex shader, the position would be a vector (x, y, z, w) with x, y, z values between -w and w

Yes, that is true before the perspective division (before dividing by w). I should have been more precise in my last post, but this doesn’t change what I said: I was talking about NDC coordinates.

I looked into the GLSL specification, and yes, you are right: gl_FragCoord is in pixel coordinates.

I am not used to that last built-in; you would normally get texture coordinates at the vertices using the gl_MultiTexCoord* built-ins. These are your real texture coordinates (the ones you set with glTexCoord*), and then there is no need to divide by your screen size.
And to get the fragment depth, gl_FragDepth or gl_FragCoord.z works.
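
A minimal sketch of what I mean, assuming the texture coordinates are supplied with glTexCoord2f as in your final full-screen quad:

// vertex shader: pass the application-supplied texture coordinates through
varying vec2 uv;

void main()
{
    gl_Position = ftransform();
    uv = gl_MultiTexCoord0.xy;   // the coordinates set with glTexCoord2f
}

// fragment shader: sample with the interpolated coordinates, no division needed
uniform sampler2D ShadowMap;
varying vec2 uv;

void main()
{
    float depth = texture2D(ShadowMap, uv).x;
    gl_FragColor = vec4(vec3(depth), 1.0);   // e.g. visualize the stored depth
}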

Yes, but the objective here is to transform 3D coordinates into 2D coordinates in order to fetch the correct texel from the depth-map texture. Thus, I can’t use glTexCoord to specify the texture coordinates.

Everything works perfectly using gl_FragCoord, that’s really cool :)

Thank you again for your help.

Yes, but the objective here is to transform 3D coordinates into 2D coordinates in order to fetch the correct texel from the depth-map texture. Thus, I can’t use glTexCoord to specify the texture coordinates.

Fair enough! :)

You’re welcome.