I’ve been trying to implement depth peeling, but haven’t been able to get it to work. All my peels contain a complete rendering of the scene, which means that the secondary depth test (the shadowmap-based one) doesn’t actually discard any fragments.
As far as I can tell, my code matches the pseudocode in Cass Everitt’s whitepaper exactly. I have my secondary Z-buffer in a texture, with the comparison function set to GL_GREATER and the comparison result written to alpha. Alpha testing is disabled for the first peel and set to (GL_GREATER, 0.5) for all subsequent peels. Depth testing is always enabled with a depth func of GL_LESS. I clear the color and Z buffers at the beginning of each pass and copy the Z buffer into the depth texture at the end of each pass.
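For reference, here’s a rough sketch of the per-pass state I’m describing. This is not my actual code: the names (depthTex, pass, drawScene, w, h) are placeholders, and it assumes ARB_depth_texture/ARB_shadow with EXT_shadow_funcs for the GL_GREATER compare:

```
/* Sketch of one depth-peeling pass.  Assumes ARB_depth_texture +
 * ARB_shadow, plus EXT_shadow_funcs for GL_GREATER; the compare
 * result is routed to alpha via GL_DEPTH_TEXTURE_MODE_ARB. */
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE_ARB,
                GL_COMPARE_R_TO_TEXTURE_ARB);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC_ARB, GL_GREATER);
glTexParameteri(GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE_ARB, GL_ALPHA);

if (pass == 0) {
    glDisable(GL_ALPHA_TEST);       /* first peel: keep everything */
} else {
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);  /* later peels: keep only fragments
                                       behind the previous layer */
}

glDepthFunc(GL_LESS);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawScene();                        /* placeholder */

/* copy this pass's Z into the depth texture for the next peel */
glBindTexture(GL_TEXTURE_2D, depthTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 0, 0, w, h);
```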
The only big difference I can see between my code and Cass’s is that I’m not using the depth-replace texture shader. I haven’t implemented it yet because I was hoping to avoid it by applying a bit of fudge somehow. Still, I would expect its absence to cause artifacts similar to Z-fighting, not to make the depth comparison fail altogether.
Am I really required to use depth replace, or is there something else that I might have overlooked?
On NV2x you really are required to use depth replace so that values match exactly. You can see my code for this here: http://cvs1.nvidia.com/OpenGL/src/research/layerz/layerz.cpp
I have played around with using fudge factors instead of depth replace, but they have always produced artifacts. You could probably come up with a technique that successively decreases the polygon offset for each peel to get reasonably artifact-free results.
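That idea might look something like the following; the per-peel offset schedule is my own guess, not code from the layerz demo, and renderPeel/numPeels/maxOffsetUnits are placeholder names:

```
/* Hypothetical fudge in place of depth replace: bias each peel with a
 * successively smaller polygon offset so the shadow compare clears the
 * previous layer.  The decreasing schedule is a guess; tune per scene. */
glEnable(GL_POLYGON_OFFSET_FILL);
for (int peel = 0; peel < numPeels; ++peel) {
    float units = maxOffsetUnits / (float)(peel + 1);
    glPolygonOffset(0.0f, units);   /* constant depth bias, no slope term */
    renderPeel(peel);               /* placeholder: one peeling pass */
}
glDisable(GL_POLYGON_OFFSET_FILL);
```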
Note that on GeForceFX this process is much simpler. You simply do:
TEXC H0, f[WPOS], TEX0, RECT;
MOV o[COLH], f[COL0];
You can replace the “MOV o[COLH], f[COL0];” with whatever complex shader you like. You use one texture unit to do the peeling, but you’re otherwise unconstrained. Much simpler! No more of those annoying texture matrix and texgen setups.
If anybody wants the actual source to GeForceFX version of depth peeling, I’ll be happy to provide it offline (it will be publicly available before too long, I’m sure).
glAlphaFunc(GL_GREATER, 0); ?
Cass, thanks for the clarification. I’ll go ahead and add the depth replace shader, then. It’s true that things will be much easier on GeForceFX, but the reason I brought this up in the first place is that I wanted to keep my hardware requirements down for once.
Pocketmoon, changing the reference value wouldn’t make a difference, because alpha contains the binary result of the shadow-map comparison, so it’s always 0 or 1.
In the order-independent-transparency demo, I still need to use alpha for blending, so I can’t throw away fragments unless their alpha==0.
(Not that this is necessarily relevant for your particular use of depth peeling.)
Also, if you’re trying to keep hardware requirements down, I would do some research on avoiding depth replace. I bet you can get the artifacts down to “acceptable” levels and still leave yourself some room for interesting shading.