Deferred Shading Stencil Tests

I want only those pixels shaded that lie inside the bounding volume of my light. To do that, I'm planning to use a stencil test.

As suggested in the Leadwerks engine article, I'm rendering the depth into a depth buffer texture with an FBO. If I want to compare the light volume against the current depth, I need to bind the depth buffer texture to the FBO as the depth attachment. If I want to do a deferred lighting pass, I have to bind it to the shader as a texture. The problem is that this doesn't seem possible simultaneously.

What should I do? Copy the depth buffer texture into another depth buffer texture? Or first bind the depth texture as the depth buffer and then to the shader, which would mean two FBO changes per light? Or is it somehow possible to make the depth buffer read-only and use it for the shader and as the depth buffer simultaneously?

Does anyone know how the Leadwerks engine does it?

Actually, you don't need to compare anything with the rendered depth yourself.
If you choose the stencil test, each light is rendered in two consecutive steps.
First, you render your light volume with two-sided stencil and the appropriate stencil functions (just as with stencil shadows). Don't forget to switch color writes off, since you don't need any fragment processing besides the stencil test.
Now you have a region on screen with a non-zero stencil mask. You then draw your volume's front faces (or its screen-space bound) with the lighting pass shader.
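The two steps above can be sketched roughly as follows (a minimal sketch, not code from the posts; helper names like `drawLightVolume()` are placeholders):

```cpp
// Pass 1: mark pixels inside the light volume in the stencil buffer.
glEnable(GL_STENCIL_TEST);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);                                // depth is read-only here
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
glStencilFunc(GL_ALWAYS, 0, 0xFF);
// Two-sided stencil (core since GL 2.0): increment on back-face depth fail,
// decrement on front-face depth fail -- the classic shadow-volume trick.
glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);
drawLightVolume();                                    // placeholder

// Pass 2: shade only where the stencil value is non-zero.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
useLightingShader();                                  // placeholder
drawLightVolumeFrontFaces();                          // placeholder
```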
But I'd rather advise using the depth bounds test. AFAIK it is only supported on NVIDIA, but it's much easier to set up and use: you don't have to draw your geometry twice, and you don't have to clear the stencil mask between consecutive lights.
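For reference, the depth bounds alternative is just a couple of state calls from `EXT_depth_bounds_test` (a sketch; computing the light volume's window-space depth extent `zMin`/`zMax` is up to you):

```cpp
// Depth bounds test: fragments whose *stored* framebuffer depth lies
// outside [zMin, zMax] are discarded (note: it tests the depth already
// in the buffer, not the incoming fragment's depth).
glEnable(GL_DEPTH_BOUNDS_TEST_EXT);
glDepthBoundsEXT(zMin, zMax);
drawLightVolumeFrontFaces();   // placeholder: single lighting pass
glDisable(GL_DEPTH_BOUNDS_TEST_EXT);
```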

The problem is that this doesn't seem possible simultaneously.

Why not? I'm doing this in my deferred renderer. The G-buffer depth buffer is also bound to the light-accumulation FBO as the depth attachment, in order to support stenciling and depth bounds testing.

So, while I'm rendering into the FBO (depth and stencil writes off, depth and stencil tests on), I read from the same depth buffer as a texture in order to compute the pixel position.
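Reconstructing the pixel position from a sampled depth value boils down to inverting the projection. As a minimal, self-contained sketch (my assumption: a standard OpenGL perspective projection and a depth value in [0,1]; only the eye-space Z recovery is shown, the full position follows from the per-pixel view ray):

```cpp
#include <cassert>
#include <cmath>

// Recover eye-space Z from a [0,1] depth-buffer value d, assuming a
// standard OpenGL perspective projection with near/far planes n and f.
float eyeZFromDepth(float d, float n, float f) {
    float zNdc = 2.0f * d - 1.0f;                     // window depth -> NDC
    return 2.0f * f * n / (zNdc * (f - n) - (f + n)); // inverse of projection row 3
}
```

Sanity check: with n = 0.1 and f = 100, a depth of 0 maps back to the near plane (eye-space Z = -0.1) and a depth of 1 to the far plane (Z = -100).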

Oh yeah, I forgot to mention what skynet said about binding the depth texture to both of the FBOs and also using it as a texture for look-ups (with write states appropriately disabled).

Yes, as skynet said earlier, you should simply disable the depth mask for this FBO. According to the FBO spec it is fine to use the same texture as the depth attachment and for lookups, because you don't modify it, you just read it.
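The setup being suggested looks roughly like this (a sketch using the EXT_framebuffer_object-era calls discussed in this thread; `lightFbo` and `gbufferDepthTex` are placeholder names):

```cpp
// Attach the G-buffer depth texture to the light-accumulation FBO...
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, lightFbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                          GL_TEXTURE_2D, gbufferDepthTex, 0);

// ...and bind the very same texture for shader look-ups:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, gbufferDepthTex);

// The crucial part: make the attachment read-only for this pass.
glDepthMask(GL_FALSE);     // no depth writes
glEnable(GL_DEPTH_TEST);   // but keep depth testing on
```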

  1. Render to your gbuffer (color, normal, and depth).
  2. Render lights into another FBO (color and depth). Copy the depth component from the G-buffer into this next buffer (either an FBO or the back buffer). Depth blitting is the easiest, but it won't work on ATI hardware, so just draw a fullscreen quad, read the depth value from the G-buffer depth texture, and write it out with gl_FragDepth.
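The fullscreen-quad depth copy from step 2 can be sketched as a tiny fragment shader (GLSL 1.20-era style; the sampler name is a placeholder):

```glsl
// Copies depth from the G-buffer depth texture into the current depth
// buffer via gl_FragDepth. Draw a fullscreen quad with color writes off,
// depth writes on, and glDepthFunc(GL_ALWAYS) so every pixel is replaced.
uniform sampler2D gbufferDepth;   // placeholder name

void main()
{
    gl_FragDepth = texture2D(gbufferDepth, gl_TexCoord[0].xy).r;
}
```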

The main idea is that you cannot read from and write to a texture at the same time, so you can't share the depth texture across both buffers.

In the article about STALKER in GPU Gems 2, they discuss some other approaches, but I didn't really understand them.

Leadwerks: why do you want to modify the depth values in the depth texture during the light rendering pass?
I see no reason to copy depth values from pass 1 to pass 2. Just use the same depth texture as the FBO attachment in both passes, as skynet and Jackis already said. It is completely legal to use a depth texture as the depth attachment and also as a sampler in the fragment shader, as long as you have turned depth writes off.

That doesn't work on ATI (it results in garbage on screen). I sent them a bug report a few months ago.

Ah yes, now I remember: I've also seen that garbage. If you use a depth24, depth24_stencil8, or depth32f_stencil8 internal format, it doesn't work.
But it works fine if you use the depth32 or depth32f texture internal format, at least on an HD2400 under 32-bit Vista.

I see no reason to copy depth values from pass 1 to pass 2. Just use the same depth texture as the FBO attachment in both passes, as skynet and Jackis already said.

I think I needed depth writing enabled, or something like that. I'm sure there was a good reason, if I did it. I got it to work on ATI cards, so that's a good enough reason to call it correct!

Well, in my deferred shading code I see no use for enabling depth writes in the lighting passes. In my code, lighting only calculates the final color of pixels; it has no need to change the depth value. Calculating and writing the proper depth value is the responsibility of the geometry pass.

It would be interesting to know why you have such a need. Is it for some special effect, or is it some kind of optimization?

It probably had to do with ATI not working and with needing to keep my buffers separate.