Second depth buffer

Support hardware second-depth buffers. They would be very good for shadow-map biasing problems and also for CSG. I bet they would rock, and they would be easy to implement in hardware.

Just a quick post to show support for this (or to at least foster discussion). It would also be useful for translucent surfaces and modifier volumes.
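To make the shadow-map angle concrete: one classic way a second depth layer fixes biasing is the midpoint trick, where the comparison plane is placed halfway between the first and second depths so it sits inside the occluder and no tuned bias is needed. A minimal software sketch of the idea (function names are mine, not from any API):

```c
#include <assert.h>

/* Midpoint shadow test: instead of comparing against the nearest
 * occluder depth (which needs a depth bias to avoid acne), compare
 * against the midpoint between the first and second depth layers. */
float midpoint_depth(float first_depth, float second_depth)
{
    return 0.5f * (first_depth + second_depth);
}

/* Returns 1 if a fragment at 'depth' is lit, 0 if shadowed. */
int midpoint_shadow_test(float depth, float first_depth, float second_depth)
{
    return depth <= midpoint_depth(first_depth, second_depth);
}
```

The front face of the occluder itself (depth equal to the first layer) passes the test with zero bias, which is exactly the acne case a plain first-depth map gets wrong.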

And what is preventing you from using glslang tools to implement multiple depth buffers?

Originally posted by Korval:
And what is preventing you from using glslang tools to implement multiple depth buffers?
Well, at the moment I am trying to perform double-depth using GLSL. The problem is that I need it in a cube-map FBO texture that is being rendered. When I do the

   textureCube(cubeTex, dir); // cubeTex is also the current FBO attachment

I get a garbage value because the FBO is not complete (only the current face is active)… Of course I could do this with multiple passes, but that would kill performance.

Does anybody have second-depth running in a shadow cube map to avoid biasing problems in ONE pass using GLSL, please?
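For reference on the cube-map lookup being discussed: a cube-map shadow fetch is driven by a direction vector, and the face is selected by the direction's major axis. A software sketch of that selection and a distance-based shadow test (a sketch of the general technique, not the poster's code; 'bias' is a hypothetical parameter):

```c
#include <math.h>

/* Pick the cube-map face for a direction vector: the face whose axis
 * has the largest absolute component. 0=+X 1=-X 2=+Y 3=-Y 4=+Z 5=-Z,
 * matching the GL_TEXTURE_CUBE_MAP_* ordering. */
int cube_face_for_dir(float x, float y, float z)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) return x >= 0.0f ? 0 : 1;
    if (ay >= az)             return y >= 0.0f ? 2 : 3;
    return z >= 0.0f ? 4 : 5;
}

/* Shadow test against a stored light-space distance. */
int cube_shadow_lit(float frag_dist, float stored_dist, float bias)
{
    return frag_dist <= stored_dist + bias;
}
```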

Originally posted by Korval:
And what is preventing you from using glslang tools to implement multiple depth buffers?
You could use FBOs to write to textures and fragment shaders to read from textures, but you can’t read and write to the same buffer (which is effectively what the depth test does).

This is the relevant section of the FBO extension specification:

4.4.3  Rendering When an Image of a Bound Texture Object is Also
Attached to the Framebuffer
Special precautions need to be taken to avoid attaching a texture
image to the currently bound framebuffer while the texture object is
currently bound and enabled for texturing.  Doing so could lead to
the creation of a "feedback loop" between the writing of pixels by
the GL's rendering operations and the simultaneous reading of those
same pixels when used as texels in the currently bound texture.  In
this scenario, the framebuffer will be considered framebuffer
complete (see section 4.4.4), but the values of fragments rendered
while in this state will be undefined.  The values of texture
samples may be undefined as well, as described in section 3.8.8.  

The closest that you can get is to use multi-pass techniques.
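The multi-pass route here is essentially depth peeling: pass N reads the depth texture written by pass N-1 (ping-ponging between two textures, so nothing is read and written at once) and keeps the nearest depth strictly behind the previous layer. A software sketch of what one peeling pass computes at a single pixel (names are mine):

```c
/* One depth-peeling pass over the fragment depths at a pixel: keep
 * the nearest depth strictly behind 'prev_layer'. Pass 1 uses a
 * prev_layer in front of everything to find the first depth; pass 2
 * feeds pass 1's result back in (read from a *different* texture)
 * to find the second depth. 1.0f means "no fragment" (far plane). */
float peel_pass(const float *depths, int n, float prev_layer)
{
    float nearest = 1.0f;
    for (int i = 0; i < n; ++i)
        if (depths[i] > prev_layer && depths[i] < nearest)
            nearest = depths[i];
    return nearest;
}
```

Each extra layer costs a full geometry pass, which is the performance problem already mentioned above.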

Anybody have a second-depth running in a shadow cubemap to avoid biasing problems in ONE pass using GLSL, please?
Two depth buffers aren’t even going to help you with this.

You could use FBOs to write to textures and fragment shaders to read from textures, but you can’t read and write to the same buffer
But you don’t read and write to/from the depth buffer. The system does the reading, comparison, and writing for you.

I understand what you’re getting at, and it could be useful. But it would require actually reading the depth value from the depth buffer(s) into the shader and then doing something with it. And then deciding which depth buffer gets written into if a depth test fails.
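In other words, the hardware unit would need a fixed update rule over two depth values per pixel. A sketch of one plausible policy (keep the two nearest depths seen so far; this is my guess at the semantics, not anything specified):

```c
/* Candidate update rule for a hardware second depth buffer: the pair
 * (d0, d1) always holds the two nearest depths seen so far, d0 <= d1.
 * A fragment that fails the primary test against d0 may still
 * displace the second depth d1. */
void update_two_depths(float *d0, float *d1, float frag_depth)
{
    if (frag_depth < *d0) {        /* passes the primary depth test */
        *d1 = *d0;                 /* old nearest becomes second depth */
        *d0 = frag_depth;
    } else if (frag_depth < *d1) { /* fails primary, updates secondary */
        *d1 = frag_depth;
    }
}
```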

Originally posted by Korval:

But you don’t read and write to/from the depth buffer. The system does the reading, comparison, and writing for you.

Yes. However, if you were to emulate a (second) depth buffer using fragment shaders, you would need to perform this reading, comparison, and writing to/from a texture (not the system depth buffer).

Originally posted by Korval:

I understand what you’re getting at, and it could be useful. But it would require actually reading the depth value from the depth buffer(s) into the shader and then doing something with it. And then deciding which depth buffer gets written into if a depth test fails.

I thought that was my point: that you can’t currently emulate a depth buffer (similar to the system depth buffer) because of the read/write issues. This was in response to:

Originally posted by Korval:

And what is preventing you from using glslang tools to implement multiple depth buffers?

It’s not reasonable to dismiss this suggestion (to add a second depth buffer to GL) by citing that you can use GLSL to implement multiple depth buffers, because you can’t emulate a fully functional depth buffer using GLSL.

I thought that was my point: that you can’t currently emulate a depth buffer (similar to the system depth buffer) because of the read/write issues.
I guess I didn’t fully make the point:

There will be no reading from buffers into the fragment shader. Possibly not ever. So, any technique that would require such a thing loses by default.

Originally posted by Korval:
[b]I guess I didn’t fully make the point:

There will be no reading from buffers into the fragment shader. Possibly not ever. So, any technique that would require such a thing loses by default. [/b]
That’s exactly what I said in my original post. I even made a point that the closest you can get is to use multi-pass techniques.

You asked “what is preventing you from using glslang tools to implement multiple depth buffers?” and I pointed this out in response (to show that it’s not possible). If you were suggesting that one can use GLSL to implement multiple depth buffers, I would like to know the approach you would use.

Korval just likes to argue. I like it too, but only if I’m really bored and I know I can win. On Monday nights, nothing beats a vigorous debate, pointless as it may be :slight_smile:

There isn’t even support for reading the stencil value in shaders, unless SM4 finally allows it.
A double depth buffer would be nice. It could also be used for order-independent transparency (OIT).

I get a garbage value because the FBO is not complete (only the current face is active)… Of course I could do this with multiple passes, but that would kill performance.
You are using a FBO that is not complete?

Originally posted by V-man:
You are using a FBO that is not complete?
Nope. I was trying to save depth + second depth in a floating-point texture, but when I do the texture lookup through the currently-in-use FBO I get garbage, as indicated in the spec 8(

By the way, it would also be nice to access the stencil values from fragment shaders, not only the second depth…

By the way, it would also be nice to access the stencil values from fragment shaders, not only the second depth…
Before you start asking for second depth and stencil in the fragment shader, doesn’t it make sense to get the basics down first? Like, color reads from the fragment shader? And/or first-depth?

Granted, I doubt any of these are going to happen; it’s more likely that we’d have a special “blending shader” stage that’s relatively restricted, but more capable than fixed-function blending/depth test. As suggested in this article.

Also, what’s this going to cost? I think the reason that cool features don’t make it into hardware is simply due to the additional cost, the necessary rework of the architecture, and other issues like power consumption, heat, etc. But obviously we’d all like to see a fast generic solution for transparency. Perhaps there’s another?

Originally posted by Korval:

Granted, I doubt any of these are going to happen; it’s more likely that we’d have a special “blending shader” stage that’s relatively restricted, but more capable than fixed-function blending/depth test. As suggested in this article.

There is very little information in that article. Do you have other links to similar information?

… or do we just have to wait until SIGGRAPH? :frowning:

Just a quick note.

Accessing framebuffer elements in a fragment shader was in the GLSL spec at some point, but was subsequently removed. From the GLSL specification:

  1. Should the fragment shader be allowed to read the current location in the frame buffer?
DISCUSSION: It may be difficult to specify this properly while taking into account multisampling. It also may be quite difficult for hardware implementors to implement this capability, at least with reasonable performance. But this was one of the top two requested items after the original release of the shading language white paper. ISVs continue to tell us that they need this capability, and that it must be high performance.
RESOLUTION: Yes. This is allowed, with strong cautions as to performance impacts.
REOPENED on December 10, 2002. There is too much concern about impact to performance and impracticallity of implementation.
CLOSED on December 10, 2002.

Originally posted by Leghorn:
I think the reason that cool features don’t make it into hardware is simply due to the additional cost, the necessary rework of the architecture, and other issues like power consumption, heat, etc.
Not to mention performance. Not every performance issue can be solved by adding additional parallel pipes. A feedback loop in the pipeline makes unrestricted parallel execution impossible.

Originally posted by Korval:
Before you start asking for second depth and stencil in the fragment shader, doesn’t it make sense to get the basics down first? Like, color reads from the fragment shader? And/or first-depth?

Absolutely, yes. But well, I was trying to stick to the post title, hehe

The IDEAL would be access to any framebuffer color, depth, stencil and multisample value from fragment shaders (fear the performance, though, huh), but if I could just access the texels of a non-complete FBO I would be happy (because I could do the second-depth thing :smiley: )

but if I can just access the texels in a non-complete FBO I will be happy
You should call it “in use”, not “non-complete”.
This has technical issues, like write-before-read hazards.

Somehow the blending unit solves it without losing too much performance.

Not only write-before-read: also things like read-before-write and write-before-write, the latter leading to nasty lost-update problems. It would require either fragment-level synchronisation or a flush of the whole fragment pipe after each triangle rendered.

The blend unit solves these issues simply by being fixed-function; it’s easy to make a single read-modify-write sequence atomic. If you make this programmable, you get a lot of problems. They are not unsolvable, but solving them costs a lot of performance.

If you give fragment program read access to the destination framebuffer (and reading from a texture that is currently bound to an FBO is exactly that), you’ll get the same problems at a much more critical stage in the pipe. You’d considerably slow down the entire fragment pipe.
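The lost-update hazard mentioned above is the standard read-modify-write race: if two fragments hitting the same pixel both read the old destination before either write lands, one contribution vanishes. A minimal sequential illustration with additive blending (the interleaving is written out by hand to make it deterministic):

```c
/* Additive blend of two fragments into one pixel, with each fragment's
 * read-modify-write kept atomic: both contributions survive. */
float blend_atomic(float dst, float a, float b)
{
    dst = dst + a;   /* fragment A: read, add, write */
    dst = dst + b;   /* fragment B: read, add, write */
    return dst;
}

/* The same two fragments with interleaved reads: both read the old
 * destination before either write lands, so fragment A's contribution
 * is lost when B writes last. This is what the hardware's fixed-function
 * blend unit prevents and what a naive programmable path would allow. */
float blend_lost_update(float dst, float a, float b)
{
    float read_a = dst;   /* A reads */
    float read_b = dst;   /* B reads before A has written */
    dst = read_a + a;     /* A writes */
    dst = read_b + b;     /* B overwrites, dropping A's add */
    return dst;
}
```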