I’ve written an OpenGL 1.4 + Cg (NV30 emulation) demo that implements soft shadows, using occluder distance to enhance/attenuate the shadows and multiple texture samples to generate soft (penumbra-type) edges. I’m using a shadow map (captured in a floating-point buffer) to grab the back-face polys from the light POV.
If a point is occluded by more than one surface, the shadows are wrong since the shadow map only captures the occluder nearest the light.
I need to add depth peeling to generate a deep shadow map. This will allow me to capture the first four back-face occluders, which should be enough for my purposes.
The method I propose to capture the 4 depth layers is:
Create a single RGBA 16-bit floating-point pbuffer (+ depth). Then, from the light POV (to create the map):
Pass 1:
- Render to the pbuffer.
- Write fragment depth to OUT.R.
- CopyTex the result and bind it as input to pass 2.
Pass 2:
- Copy IN.RGB to OUT.GBA (shifting the earlier layer along).
- If fragment depth <= IN.R, kill the fragment; otherwise write fragment depth to OUT.R. The ordinary depth test then keeps the nearest surviving fragment, i.e. the second layer back. (Note the comparison has to reject fragments at or in front of IN.R — nothing can be nearer than the layer already captured, so testing "depth < IN.R" would never peel anything.)
- CopyTex the result and bind it as input to pass 3.
Repeat the same peel for passes 3 and 4.
At the end of 4 passes the RGBA channels contain
depth4, depth3, depth2 and depth1, which I can use in my final fragment program to determine the closest occluder depths.
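To sanity-check the scheme, here is a per-pixel simulation of the four peeling passes in plain Python (my own illustration, not the actual Cg fragment programs — `peel_passes`, `pack_rgba` and the far-plane default are made-up names/values):

```python
def peel_passes(fragment_depths, num_layers=4, far=1.0):
    """Return [d1, d2, d3, d4]: the nearest depth layers, front to back."""
    layers = []
    prev = -1.0  # pass 1 has no input map; accept every fragment
    for _ in range(num_layers):
        nearest = far  # depth buffer cleared to the far plane
        for d in fragment_depths:
            if d <= prev:    # "peel": reject layers already captured (IN.R test)
                continue
            if d < nearest:  # ordinary GL_LESS depth test
                nearest = d
        layers.append(nearest)
        prev = nearest       # this pass's R channel becomes IN.R next pass
    return layers

def pack_rgba(layers):
    """Mimic the IN.RGB -> OUT.GBA shifts: final RGBA = (d4, d3, d2, d1)."""
    d1, d2, d3, d4 = layers
    return (d4, d3, d2, d1)
```

For example, a pixel covered by fragments at depths [0.7, 0.3, 0.5, 0.9] peels to layers [0.3, 0.5, 0.7, 0.9]; pixels with fewer than four occluders end up with the far-plane value in the unused channels, which the final shader can treat as "no occluder".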
Does this sound sane?
PS I just noticed that ATI have their Mojo day presentations up at http://www.ati.com/developer/mojo.html
There’s an outline of improved shadow mapping using multiple texture samples for percentage closer filtering in the Image Based Rendering PDF.
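For anyone unfamiliar with it, percentage closer filtering boils down to averaging several binary depth comparisons instead of doing one; a minimal sketch (my own illustration, not ATI's code — `pcf_shadow`, the 3x3 kernel and the bias value are all assumptions):

```python
def pcf_shadow(shadow_map, x, y, receiver_depth, bias=0.005):
    """Return a shadow factor in [0, 1]: 0 = fully lit, 1 = fully shadowed."""
    h, w = len(shadow_map), len(shadow_map[0])
    hits = 0
    samples = 0
    for dy in (-1, 0, 1):          # 3x3 kernel of neighbouring texels
        for dx in (-1, 0, 1):
            sx = min(max(x + dx, 0), w - 1)  # clamp to the map edges
            sy = min(max(y + dy, 0), h - 1)
            samples += 1
            # compare receiver depth against each texel, not the average depth
            if receiver_depth > shadow_map[sy][sx] + bias:
                hits += 1
    return hits / samples
```

The key point is that the comparisons are averaged, never the stored depths themselves — averaging depths across an edge gives a meaningless value, while averaging comparison results gives the fractional penumbra.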
[This message has been edited by pocketmoon (edited 10-03-2002).]