Depth_Stencil problem on Radeon 4850

My deferred engine, developed on a GeForce 8, is displaying weird stippling artifacts all over the screen on the Radeon 4850 with Catalyst 9.2 (OpenGL 2.1 context). No issue with nVidia, but a horrible grainy pattern on the Radeon. I have tracked this down to the full screen lighting phase of the deferred renderer, where I enable the stencil test and then draw a full screen quad.
I suspect that ATI are not properly supporting DEPTH24_STENCIL8 as part of the framebuffer.

Has anyone else used D24_S8 on the Radeon 4xxx ?

On a similar note, has anyone got a quick way to ‘see’ the contents of the stencil buffer? I.e. copy the stencil to a texture so I can visualise its contents.

Yeah, I am getting the same defect on an HD2400 with my deferred rendering. DEPTH32F_STENCIL8 doesn’t help either. The only thing that helps is not using stencil.

You can try GL_NV_copy_depth_to_color; AMD added this extension in Catalyst 9.2.

Working fine on Radeon 4850 with both Catalyst 9.1 and 9.2.

NULL_PTR: Thanks for the NV_ext tip.
Are you saying my setup code for the depth_stencil texture is wrong?

During the lighting phase of the deferred engine, I copy the MRT colour texture to the target framebuffer, but select only the pixels tagged as “do not light” using a stencil ref value of $01.

[b]bind framebuffer…
set 2D matrices…
glClearColor (0.0, 0.0, 0.0, 0.0);
glEnable (GL_STENCIL_TEST);

glStencilFunc (GL_EQUAL, $01, $FF); // pass the stencil test if the ref value equals the stencil buffer value
bind MRT colour texture
draw full screen quad[/b]

Next, I render the directional lights as a full screen quad… with additive blending…

glEnable (GL_STENCIL_TEST);
glStencilFunc (GL_EQUAL, $08, $FF); // pixels to be lit are $08
glEnable (GL_BLEND);
glBlendEquation (GL_FUNC_ADD);
glBlendFunc (GL_ONE, GL_ONE);
glColor4f (1, 1, 1, 0.7);
bind all three MRT textures …
enable DirectionalLight shader…
set light uniform params…
draw full screen quad…
disable shader…
unbind framebuffer…

The problem, however, starts with the fixed-function texture copy with stencil enabled.
If I disable the stencil test, I don’t get any stippling artifacts.

NULL_PTR: Can I clarify - are you using a depth_stencil texture as part of the framebuffer, or are you using stencil and depth renderbuffers?

Considering that all this code works very well on the GeForce 8, I can’t see how it can be wrong on the Radeon. I’ve also placed extensive glGetError checking in my code at every phase of the rendering (GBuffer, lighting, refraction, reflection, transparency, post-process). No errors.

Looking at my Depth_Stencil texture creation code… it looks like I use:


and use GL_LINEAR for min and mag filtering.

Any suggestions with stencil or stencil buffer creation?

I do use a depth/stencil renderbuffer instead of a texture to bind to the FBO.

Now I’ve replaced render buffer with texture and I can see some artifacts, similar to what you’ve described.

…I thought so. Now I’ll have to rearchitect the engine so that the shared depth_stencil can be a renderbuffer as well as a texture. Ho hum.

NULL_PTR: Do you use the same renderbuffer ID when creating the depth buffer and the stencil buffer, or are you using separate IDs?
Here’s my code - but this is not tested w.r.t stencil.

[b]glGenRenderbuffers( 1, @FrameBuffers[bufferID].FBDepth ); // allocate shared depth render target.
glBindRenderbuffer( GL_RENDERBUFFER, FrameBuffers[bufferID].FBDepth );
glRenderbufferStorage( GL_RENDERBUFFER, Template.DepthFormat, FBWidth, FBHeight );
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, FrameBuffers[bufferID].FBDepth );

glGenRenderbuffers( 1, @FrameBuffers[bufferID].FBstencil ); // allocate shared stencil render target.
glBindRenderbuffer( GL_RENDERBUFFER, FrameBuffers[bufferID].FBstencil );
glRenderbufferStorage( GL_RENDERBUFFER, GL_STENCIL_INDEX8, FBWidth, FBHeight ); // note: needs a sized format such as GL_STENCIL_INDEX8
glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, FrameBuffers[bufferID].FBstencil );[/b]

I’d appreciate your comments as the stencil code may be wrong.

Here’s my code that yields correct results:
[b]//Render buffer creation
glGenRenderbuffers(1, &depthStencil);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencil);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8_EXT, width, height);
glBindRenderbuffer(GL_RENDERBUFFER, 0);

// FBO setup
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthStencil);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencil);[/b]

I have also tried a depth_stencil renderbuffer instead of a texture.
After the geometry pass I copy the depth values from these renderbuffers to a texture using glBlitFramebuffer, so that I can sample them in the lighting pass. But it didn’t help - I still see bad artifacts. I think the problem is in reading values out of the depth_stencil texture, not rendering into it, because the geometry pass yields correct results - the depth is not corrupted.

Looks like separate renderbuffers are the way to go on ATI… thanks NULL_PTR.

martinsm: I’ve had issues with framebuffer blit - I can’t get it to do anything. No errors - just no copying! Yet it seems intuitive enough. Have you an example you can post for me?

Blitting works fine for me on ATI.

glBindFramebufferEXT(GL_READ_FRAMEBUFFER_EXT, fbo_handle_to_read_from);
glReadBuffer(GL_COLOR_ATTACHMENT0_EXT); // which attachment to read from
glBindFramebufferEXT(GL_DRAW_FRAMEBUFFER_EXT, fbo_handle_to_write_to);
glDrawBuffer(GL_COLOR_ATTACHMENT0_EXT); // which attachment to write to
glBlitFramebufferEXT(0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

Note that you can pass 0 as the fbo handle to glBindFramebufferEXT to indicate that you want to read from / write to the window-system-provided framebuffer.

I’ve re-coded the engine to pass a shared depthstencil ID between all sections of code that need access to the main scene’s depth buffer; now the problem has ‘gone away’. Looks like ATI are not properly supporting depth_stencil textures. Thanks guys for your help.

Has anyone reported this as a bug to ATI ?

I’m pretty sure my code is Blitting between FBOs properly, but now I’ve seen your sample I’ll recheck carefully.

Yes, somebody here wrote that he reported this bug to ATI already some months ago.

I’ve got the Depth_Stencil working on nVidia and ATI now via the use of render buffers instead of depth_stencil texture.

A long-standing goal was to ‘view’ the stencil buffer so I could check its contents. However, I can’t seem to get the following to work…

After binding the MRT framebuffer and rendering to the GBuffer…
glDrawBuffer (GL_COLOR_ATTACHMENT0_EXT); // texture to receive depth
glCopyPixels (0, 0, width, height, GL_DEPTH_STENCIL_TO_RGBA_NV);
unbind framebuffer.

How does CopyPixels work? The spec just says it copies portions of the colour/depth buffer to another area of the framebuffer.
I want to copy the depth to a texture attachment in an FBO.

Why is my texture just black ?
Anyone used the nVidia extension before ?

glCopyPixels writes at the current raster position, so you need to set that (e.g. with glRasterPos2i) - which in turn requires a 2D projection matrix setup.

A demo of the depth_stencil issue has been sent to
so hopefully a fixed driver will be released soon!

Let’s hope so!

I’m getting the same artifacts when blitting a plain old 24-bit depth buffer from a renderbuffer to a texture (the same problem with or without stencil). A 16-bit depth buffer works fine (but is too low resolution). A 32-bit depth buffer causes an invalid operation.

Edit: Rendering directly to the texture (no blitting) works fine - but this way you don’t get any antialiasing…