Depth Peeling, Deep Shadow Maps (NV30emu)

I’ve written an OpenGL 1.4 + Cg (NV30 emulation) demo that implements soft shadows, using occluder distance to enhance/attenuate the shadows and multiple texture samples to generate soft (penumbra-type) edges. I’m using a shadow map (captured in a floating-point buffer) to grab the back-face polys from the light’s POV.
http://www.baddoggames.com/cg/SHAD1.JPG http://www.baddoggames.com/cg/SHAD2.JPG

If a point is occluded by more than one surface, the shadows are wrong since the shadow map only captures the occluder nearest the light.

I need to add depth peeling to generate a deep shadow map. This will allow me to capture the first 4 back-face occluders, which should be enough for me.

The method I propose to capture the 4 depth layers is:

Create a single RGBAx16 (+depth) floating-point pbuffer. Then, from the light POV (to create the map):

Pass One:
Render to pbuffer
Update OUT.R with fragment depth
Copytex the result and bind it as input to pass 2.

Pass Two:
Copy IN.RGB to OUT.GBA
If fragment depth < IN.R
OUT.R = Frag Depth
Else
discard
CopyTex the result and bind it as input to pass 3.

Repeat as passes three and four.
At the end of 4 passes the RGBA contains the depth4, depth3, depth2, and depth1 values, which I can use in my final fragment program to determine the closest occluder depth.
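In Cg, pass two would look something like this (just a sketch, untested, with my own names). I’m assuming the depth test is GL_GREATER with the depth buffer cleared to 0 each pass, so the layers peel from farthest to nearest; flip the comparison if you peel the other way:

```
// Sketch of pass two (and three and four). 'prevLayer' is the
// CopyTex'd result of the previous pass. With the depth test set to
// GL_GREATER, each pass keeps the farthest fragment that survives
// the peel test below.
float4 main(float4 wpos : WPOS,            // wpos.z = fragment depth
            float4 screenUV : TEXCOORD0,   // texcoords mapping the pbuffer 1:1
            uniform sampler2D prevLayer : TEXUNIT0) : COLOR
{
    float4 prev = tex2D(prevLayer, screenUV.xy);

    // Peel: only fragments strictly nearer than the last layer survive.
    if (wpos.z >= prev.r)
        discard;

    // Shift IN.RGB into OUT.GBA and write the new depth into OUT.R.
    return float4(wpos.z, prev.r, prev.g, prev.b);
}
```

One thing I still need to handle: pixels where every fragment gets discarded keep the clear value, so the earlier layers at those pixels need preserving (e.g. by drawing the previous result into the pbuffer before the pass).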

Does this sound sane?

Rob.

PS I just noticed that ATI have their Mojo day presentations up at http://www.ati.com/developer/mojo.html

There’s an outline of improved shadow mapping using multiple texture samples for percentage-closer filtering in the Image-Based Rendering PDF.

[This message has been edited by pocketmoon (edited 10-03-2002).]

These look most like distance-attenuated shadows, not soft shadows…

What are you doing with your occluder distance? It looks like you are just using it to fade the shadow, which is kinda weird, and does not make any sense to me (if you are trying to be realistic).

You should actually just be using the distance from the light to the surface you are rendering, to scale the sample kernel used for percentage-closer filtering. The bigger the light appears from the surface, the bigger the sample kernel.

In other words, take multiple shadow test samples and average them. Scale the size of the sample distribution using the distance from the surface to the light.
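Something like this Cg sketch (untested, names are made up, and the shadow compare is done by hand):

```
// Sketch: take several shadow-map tests and average them. 'spread'
// scales the sample distribution; derive it from the light's size
// and the light-to-surface distance as described above.
float softShadow(uniform sampler2D shadowMap,  // light-space depth map
                 float3 shadowUV,              // projected xy + receiver depth in z
                 float  spread)                // kernel scale, in texture coords
{
    const float2 taps[4] = { float2(-1,-1), float2( 1,-1),
                             float2(-1, 1), float2( 1, 1) };
    float lit = 0;
    for (int i = 0; i < 4; i++) {
        float occDepth = tex2D(shadowMap, shadowUV.xy + taps[i] * spread).r;
        lit += (shadowUV.z <= occDepth) ? 1.0 : 0.0;  // 1 = this tap sees the light
    }
    return lit * 0.25;  // fraction of taps that are lit
}
```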

Back- or front-facing polygons and depth peeling seem to have little to do with soft shadows, unless you are doing deep shadow maps (for volumetric shadow casters).

[This message has been edited by Nakoruru (edited 10-03-2002).]

Originally posted by Nakoruru:
These look most like distance-attenuated shadows, not soft shadows…

And distance attenuation makes the shadow what, exactly? Hard? Holey? Dappled? I would suggest soft is a good term.

What are you doing with your occluder distance? It looks like you are just using it to fade the shadow, which is kinda weird, and does not make any sense to me (if you are trying to be realistic).

Put your hand just above a desk, or table, or whatever. There is a shadow. Now move your hand towards the light. You tell me what happens to the shadow (both its outline and its depth) and why. Notice the light didn’t move.

You should actually just be using the distance from the light to the surface you are rendering, to scale the sample kernel used for percentage-closer filtering. The bigger the light appears from the surface, the bigger the sample kernel.

Agreed. On the same desk, place a tall object. Look closely at the shadow and come back and tell me again that distance to occluder doesn’t matter. Again, nobody moved your light.

In other words, take multiple shadow test samples and average them. Scale the size of the sample distribution using the distance from the surface to the light.
No.

Back- or front-facing polygons and depth peeling seem to have little to do with soft shadows, unless you are doing deep shadow maps (for volumetric shadow casters).

Sigh. Back- and front-face polys are CERTAINLY important! Go look up second-depth shadow mapping.

Rob.

pocketmoon, your shadows are not soft. look at them. the edges are still sharp. the shadow simply does not have the same darkness all over its area.

depth peeling and deep shadow maps don’t have anything to do with soft shadows. those come from the fact that lights have a size. depth peeling and deep shadow maps are for not-fully-solid occluders, for “transparent shadows”, which i don’t see at all in your image. so they are utterly useless here.

soft shadows depend on the ratio of the distance from the light to the occluding surfaces, the size/area of the light, and the distance from the light to the actual shadowed surface.

not that your images don’t look cool (at least the first one; the second one looks rather buggy). but there are no soft shadows in the image, and neither deep shadows nor depth peeling is visible, used, or useful in the scene.

and you don’t even need an nv30 for this, either

anyways, continue your work.

btw, i prefer your forest images. they look too cool

that’s it. I quit. I’ll go and look for the demo that does percentage-closer filtering on current hardware. Oh look… you’re right! I DON’T need an NV30, I need an R300! Silly me.

depth peeling and deep shadow maps don’t have anything to do with soft shadows. those come from the fact that lights have a size. depth peeling and deep shadow maps are for not-fully-solid occluders, for “transparent shadows”, which i don’t see at all in your image. so they are utterly useless here.

OK with that in mind read your next bit.

soft shadows depend on the ratio of the distance from the light to the occluding surfaces, the size/area of the light, and the distance from the light to the actual shadowed surface.

OCCLUDING SURFACE.

not that your images don’t look cool (at least the first one; the second one looks rather buggy).

YOU WIN TODAY’S STAR PRIZE! The BUG is that I need multiple occluders; just storing the one nearest the light is NOT enough.

I shall carry on. The screenshots are NOT complete; the demo I have here does show penumbra shadows. I turned the effect down for the initial release because it looked bad with the OCCLUDER problem, which can’t be fixed without DEPTH PEELING and DEEP SHADOW MAPS.

But, as always, I’m open to suggestions.

Rob.

[This message has been edited by pocketmoon (edited 10-03-2002).]

hehe, capslock-coders

well yeah. if you want to do it absolutely correctly, you would need deep shadow maps. depth peeling is useful for order-independent transparency, and i don’t see that much use for it with shadowmaps, but i’ll check it anyways.
penumbras are actually the part that soft shadows make soft. the, uhm… soft shadow

if you want to do it entirely correctly, you have to render multiple depth maps… for stuff in between, you don’t need all that fancy deep shadow mapping to get a fake penumbra working more accurately. in your scene, at least, you would not see a visual difference. we already discussed a lot about how to do soft shadows in the “soft shadows” 1 and 2 threads, so i don’t think i need to repeat it here. the deep-map peeling idea you have can be useful for doing what we explained in those threads, but it is not the primary key to getting it working; it is there to possibly improve the quality…
anyways, go on with your work, it’s by far the best i’ve seen for the contest, and some of the best cg stuff as well… if you get all you want working, i bet you win. my best wishes. but only if you stop being a capslock-coder. there is no need to feel offended, i did not plan to attack you.

anyways, /me thinks deep shadow maps are useful for doing translucent objects, volumetric ones… like clouds, for example. clouds casting real shadows in real time would be cool as well (and actually your shadows look a bit like they were thrown from volumetric fog objects…)

and btw, the “soft shadows” thread implements it for the radeon8500. and it sure is possible to get it working on gf3 and gf4 as well.

>>I DON’T need an NV30, I need an R300! Silly me.<<
explain where you need an nv30 (except that you can win one in the contest)

you know, i would be more interested in an approach using GL_ARB_fragment_program… as i could then run the demo as well if i don’t buy an nvidia gpu.

> I’ll go and look for the demo that does
> percentage-closer filtering on current
> hardware.

Both GF3 and GF4 do percentage-closer filtering.

R300 does NOT do it unless you code it up in a shader, AFAICT. Also, it doesn’t actually expose the ARB_fragment_program extension yet. Boo.

Or were you applying TWO levels of sarcasm here? It all gets so confusing.

Both GF3 and GF4 do percentage-closer filtering.

By binding the map n times? Could I also implement the occluder-distance effects I’m after on GF3/4? I asked a straightforward question about depth peeling and deep shadow maps and everyone’s telling me I don’t need them when I know I do.

Or were you applying TWO levels of sarcasm here?

At least. But I was applying them correctly

[This message has been edited by pocketmoon (edited 10-03-2002).]

Sorry, I did not remember what I was planning to try for shadow mapping well enough to do a good job of explaining it.

Basically, what I plan on doing is creating a focus factor which should end up looking something like this.

focus = light_size * (1 - (distance_between_light_and_occluder / distance_between_light_and_surface))

In hard-shadow mapping you would take a single sample. I think I will try to take 5 samples from the shadow map in a quincunx configuration instead. The size of the sample pattern will be scaled by ‘focus’, centered around what would be the single sample in hard-shadow mapping.

This seems to behave properly. If the surface and the occluder are very close, then ‘focus’ will get very close to zero and the samples will be very close together, resulting in a sharp shadow. If the occluder is very close to the light, then the size of the pattern will approach ‘light_size’, which is an artist-tuned value that can be used to focus the light (basically, make it bigger or smaller, and therefore make the shadows harder or softer).

I am not exactly sure how I can combine my samples. Because they are comparison results, not depth values, it seems that I could average them, but with only 5 samples I would only end up with 6 levels of shadow (0, 1/5, 2/5, 3/5, 4/5, and 1).
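In Cg, the sampling would look roughly like this (untested sketch, my names; ‘occluderDist’ would come from an initial shadow-map read):

```
// Sketch: 5-tap quincunx PCF, scaled by the 'focus' factor above.
float quincunxShadow(uniform sampler2D shadowMap,
                     float3 shadowUV,       // projected xy + receiver depth in z
                     float  occluderDist,   // distance from light to occluder
                     float  surfaceDist,    // distance from light to surface
                     float  lightSize)      // artist-tuned light size
{
    float focus = lightSize * (1 - occluderDist / surfaceDist);

    // Center tap plus four diagonal taps.
    const float2 quincunx[5] = { float2( 0, 0),
                                 float2(-1,-1), float2( 1,-1),
                                 float2(-1, 1), float2( 1, 1) };
    float lit = 0;
    for (int i = 0; i < 5; i++) {
        float occDepth = tex2D(shadowMap, shadowUV.xy + quincunx[i] * focus).r;
        lit += (shadowUV.z <= occDepth) ? 0.2 : 0.0;
    }
    return lit;  // 0, 1/5, ..., 1 -- the six shadow levels mentioned above
}
```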

I know that this method will not be as accurate as redrawing the shadow map from 5 different perspectives, but I think it should be very close.

pocketmoon:
GF3 and GF4 have percentage-closer filtering built into the ARB_shadow/ARB_depth_texture extensions AFAIK.

i think you just enable GL_LINEAR filtering on the depth texture. the ARB_shadow spec says that the calculation of the comparison result is implementation-dependent in these circumstances and may involve comparison of several depth-texture values.

[This message has been edited by vshader (edited 10-04-2002).]

i’m currently working on a soft-shadow implementation on gf2, hehe… with shadowmaps, of course (nakoruru likes it, so i do it)

>>Or were you applying TWO levels of sarcasm here? It all gets so confusing.<<
well. if i only knew…

Originally posted by vshader:
GF3 and GF4 have percentage-closer filtering built into the ARB_shadow/ARB_depth_texture extensions AFAIK.

I know you are correct; the Luxo Jr. demo that was shown on a GF3 used PCF. But do I get control over the filter kernel size that I need for soft shadows?

Rob

daveperman, are you being sarcastic :)

Wouldn’t two levels of sarcasm not be sarcastic anymore? Like a double negative.

No, I think sarcasm is accumulative.

actually, the thing with the shadowmap is not sarcastic, nako…

for the rest: sarcasm accumulated. i think i have a triple-sarcasm here, in fact.

oh, and i think i knocked down the 16-bit shadowmap technology for gf1/gf2 from 3 passes to 1 pass, we’ll see (that means high-precision cubic shadowmaps are possible in 6 passes to generate and 6 passes to shadow… quite cool, better than 6 + 18)

Sarcasm is only accumulative if you set up your sarcasm blending function properly; you may have to multipass if you only have a limited number of hardware sarcasm units and you want to use more than that on a single person.

I know, I suck.

-Mezz

daveperman, I didn’t think anyone was paying any attention to me; I’m just an armchair graphics programmer until I can get time together to make a set of graphics demos for a resume :)

Mezz, I thought it was funny until I read ‘hardware sarcasm units,’ then I was ROTFL.

As for the original on-topic problem, there is no way to make pocketmoon’s shadows look right using his method. The problem is not really occluder distance; it is how much of the light is obscured. This is not strictly related to the positions of the light/occluder/surface. What I mean by strictly related is that there is no continuous function shadow(light_distance, occluder_distance, surface_distance) which will be anything but a rough guess of how much of the light source is actually occluded.

You can see why this has nothing to do with back- or front-facing occluder faces: how much of the light is obscured is determined by the silhouette of the occluder.

To get a physically accurate answer you have to integrate over the entire light source, doing the intensity and a shadowed/not-shadowed determination for every point on the light from the point of view of the surface (discarding any shadowed contributions). Of course, we cannot do a true integration, so we have to settle for multi-sampling.

Any method that does not multi-sample is a fake-soft-shadow technique. Volume/Area light intensity at a point is not a function where you can plug in some variables and get an answer. Like I said before, you cannot just plug in the positions of the light, occluder, and surface and get an answer. You have to integrate the lighting equation over all the points on the light.
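Just to illustrate the brute-force version (not what I plan to do): suppose you had four shadow maps, each rendered from a different sample point on the area light. The per-fragment combine in Cg would be something like this (my names, untested):

```
// Sketch: integrate visibility over the light by brute force, using
// four shadow maps rendered from four sample points on the light.
// Each uvN is the fragment projected into that map's light space.
float areaLightVisibility(uniform sampler2D map0, float3 uv0,
                          uniform sampler2D map1, float3 uv1,
                          uniform sampler2D map2, float3 uv2,
                          uniform sampler2D map3, float3 uv3)
{
    float vis = 0;
    vis += (uv0.z <= tex2D(map0, uv0.xy).r) ? 0.25 : 0.0;
    vis += (uv1.z <= tex2D(map1, uv1.xy).r) ? 0.25 : 0.0;
    vis += (uv2.z <= tex2D(map2, uv2.xy).r) ? 0.25 : 0.0;
    vis += (uv3.z <= tex2D(map3, uv3.xy).r) ? 0.25 : 0.0;
    return vis;  // fraction of the light's sample points visible
}
```

More sample points converge on the true answer; four is just to keep the sketch short.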

One really slow, but extremely accurate, way to do that would be to render the scene from the point of view of the surface, looking at the light, with a FOV such that the light is always the same size. Use NV_occlusion_query to count how many fragments of the light were rendered, then multiply that by the intensity of each pixel (which would be the light intensity divided by the surface area in pixels of the light-facing surface).

You can see from this that a simple function could never work, because there is no way to predict from the positions of the objects how much of the light is actually occluded. You have to actually look and see.

The method I outlined above is a rough idea of how I want to try to integrate the lighting equation. Essentially, I plan to take multiple samples from the shadow map in the area where the light rays from the light source shine through in order to light the surface. Some will be shadowed, some may not be; the result will be soft shadows (I hope).

One thing I try to account for is the effect of ambient light on the shadow, i.e. very close to the occluder the shadowed area sees more of the back-facing occluder than the ambient environment, and so is much darker. Using back-face occluders with depth maps also helps reduce shadow acne.

pocket,

In a perfect world (with perfect precision), back- or front-facing polygons from the light would not make any difference (at least in the algorithm I am designing). But I can see how using back-facing polygons would help a lot with shadow map artifacts. Good tip!

Back and front faces should produce the same silhouette, so which one I choose would be arbitrary without a deciding factor like artifact reduction or other considerations.

Wow, thinking about it, it seems like using light-view back-facers would completely remove the ability for polygons to inappropriately shadow themselves. It is so obvious that I wonder why I have not heard about it until now… (Of course, that neglects the fact that I should have realized it myself ^_^).

EDIT: I was looking at your screenshots again, and wonder if the reason your shadow edges were so clean is because you are using back-facing surfaces or because you are using a floating-point buffer. Maybe you are just using a high-resolution shadow map…

[This message has been edited by Nakoruru (edited 10-04-2002).]

Originally posted by Nakoruru:
I was looking at your screenshots again, and wonder if the reason your shadow edges were so clean is because you are using back-facing surfaces or because you are using a floating-point buffer. Maybe you are just using a high-resolution shadow map…

That’s the percentage-closer filter from 9 adjacent samples.