Dynamic soft shadows

I think this method is similar to the “depth of field” demo from the NVIDIA SDK.
The shadow is drawn once using stencil volumes into a texture which uses automatic mipmap generation. Then two different LOD levels of this texture are blended together, according to the distance of the pixel to the light source (i.e. written into the alpha part of the framebuffer).
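
Something like this, as a rough C++ sketch (just my guess at the blend step; the function name, parameters and clamp range are made up):

[code]
// Rough sketch of the blend described above: a sharp (fine mip) and a blurred
// (coarse mip) shadow sample mixed by the per-pixel distance to the light,
// which would be the value written into the alpha part of the framebuffer.
#include <algorithm>

float blendShadowLods(float sharpSample, float blurredSample,
                      float distToLight, float maxBlurDist)
{
    // 0 near the light -> keep the sharp shadow, 1 far away -> fully blurred.
    float t = std::clamp(distToLight / maxBlurDist, 0.0f, 1.0f);
    return sharpSample + t * (blurredSample - sharpSample);
}
[/code]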

Is this correct?

If yes, how do you plan to avoid the “melting” problem? That is why I stopped my research on this method…

Here is a demo of my technique…
www.tooltech-software.com

Ok. It’s a bit slow, but some day I will get my algorithm running faster :wink:

/Anders

Adrian is the closest to the method so far, although I’m not using automatic mipmap generation.

To avoid the “melting” problems, my idea is to have the sharpness of the shadow based on the distance to the occluder, then adjusted by a factor based on the distance to the viewer. This doesn’t remove the melting problem, but greatly helps to reduce it.
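
Roughly what I have in mind, as a sketch (the scale factors are placeholders, not my actual values):

[code]
// Base penumbra size grows with the distance to the occluder, then gets
// damped by a factor based on the distance to the viewer so the "melting"
// between neighbouring blur levels is less noticeable.
#include <algorithm>

float shadowBlurFactor(float distToOccluder, float distToViewer,
                       float penumbraScale, float viewerFalloff)
{
    float blur = distToOccluder * penumbraScale;    // farther occluder -> softer shadow
    blur /= (1.0f + distToViewer * viewerFalloff);  // damp with view distance
    return std::clamp(blur, 0.0f, 1.0f);            // 0 = hard, 1 = maximum blur
}
[/code]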

Still working on the demo with a complex scene. I hope to get it finished for next weekend… be patient.

PS.: ToolTech, trying your demo now…

PPS.: Ok, tried your demo. I don’t think it works as expected on Radeons. It looks like a standard shadow volume (the shadows still look sharp) which is just a bit darkened compared to lit areas. A color comparison gave me an average gray value of 176 for lit areas and 173 for darkened areas. In addition, overlapping shadows are darker than non-overlapping shadows (value of 170). I’ll try it on a GF4 when I have some time. Performance was ok. How many tris are there in your scene?

Y.

[This message has been edited by Ysaneya (edited 09-05-2002).]

Nakoruru,
I goofed up.
I was thinking that if I could get the alpha values in the shadow region then I could transform the alpha… but now I guess it isn’t easy to get that in the first place, and I still think the min or max of the alpha has to be found out.

The HW must support the vertex weights extension. Otherwise it will fall back on a plain shadow volume implementation.

tarantula,

I figured that is what you were trying to do. Ysaneya, how do you calculate the distance between the shadowed surface and the occluder’s silhouette?

One idea off the top of my head is to project a depth map (like in shadow mapping) into the scene and calculate the difference between the occluder depth (from the shadow map) and occludee depth from the light (perhaps from a 1d texture) and store it in the alpha.

It is the difference between the occluder depth and the occludee depth from the light, not the absolute distance from the light, that is needed to calculate how blurry a shadow should be.

The bigger the difference, the bigger the light appears relative to the occluder, and the blurrier the shadow should be.
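
As a small sketch of what I mean (assuming the occluder depth has already been fetched from the shadow map; the clamp range is arbitrary):

[code]
// The larger the gap between the occluder and the receiver along the light
// direction, the bigger the light appears relative to the occluder, so the
// softer the shadow. The result would be stored in the alpha channel.
#include <algorithm>

float penumbraFromDepthGap(float occluderDepth, float receiverDepth, float maxGap)
{
    float gap = std::max(receiverDepth - occluderDepth, 0.0f);  // 0 when unoccluded
    return std::clamp(gap / maxGap, 0.0f, 1.0f);                // blur amount for alpha
}
[/code]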

Actually… I don’t. It was not a typo, I use the distance to the light for now, not to the occluder. But I’m working on it… I’m sure there’s an answer. I’ve also been playing with the idea of storing it in the alpha buffer, but haven’t found a good solution… yet. I’m not sure the depth map projection idea will work… if you want point lights, that is.

Y.

Originally posted by Nakoruru:
[b]tarantula,

I figured that is what you were trying to do. Ysaneya, how do you calculate the distance between the shadowed surface and the occluder’s silhouette?

One idea off the top of my head is to project a depth map (like in shadow mapping) into the scene and calculate the difference between the occluder depth (from the shadow map) and occludee depth from the light (perhaps from a 1d texture) and store it in the alpha.

It is the difference between the occluder depth and the occludee depth from the light, not the absolute distance from the light, that is needed to calculate how blurry a shadow should be.

The bigger the difference, the bigger the light appears relative to the occluder, and the blurrier the shadow should be.[/b]

Actually, you could do the whole shadowing that way then… As I know you like shadow maps, why don’t you set this up?

Are you sure the difference between the occludee and occluder depths from the light decides the blurriness of the shadow for any occluder?
What if the occluder is concave towards the side of the occludee? I feel it’s the silhouette that determines it… so the depth of the silhouette edges should be interpolated, and the difference should be between this and the occludee depth.

[This message has been edited by tarantula (edited 09-06-2002).]

OK, so with shadow maps you have a texture containing a depth map from the light’s POV.

When rendering you’re comparing the depth (in light space?) of your pixel with the depth in your shadowmap.

To get soft shadows you can sample your shadow map more than once and compare the number of shadowed to non-shadowed results, e.g. with 4 samples: all shadowed = 100%, 3 shadowed = 75%, then 50%, 25%, 0%.
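
In CPU-style pseudocode, that comparison step would look something like this (the sample count and bias are placeholders):

[code]
// Count how many of the shadow-map samples are closer to the light than the
// pixel being shaded; the fraction gives the soft shadow amount (4 of 4
// shadowed = 100%, 3 of 4 = 75%, and so on).
#include <cstddef>

float shadowFraction(const float* mapDepths, std::size_t count,
                     float pixelDepth, float bias)
{
    std::size_t shadowed = 0;
    for (std::size_t i = 0; i < count; ++i)
        if (mapDepths[i] + bias < pixelDepth)  // an occluder sits in front of this pixel
            ++shadowed;
    return static_cast<float>(shadowed) / static_cast<float>(count);
}
[/code]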

Could we do the first sample at the pixel location and then use the actual difference between the two depths to determine how far away we jitter our other samples? Samples closer to the shadow depth should have further samples jittered more.

hmmmm… horribly incorrect.

Oh well, here’s some nice soft shadow papers:
http://www.mpi-sb.mpg.de/~brabec/doc/

[This message has been edited by pocketmoon (edited 09-06-2002).]

Originally posted by tarantula:
[b]Are you sure the difference between the occludee and occluder depths from the light decides the blurriness of the shadow for any occluder?
What if the occluder is concave towards the side of the occludee? I feel it’s the silhouette that determines it… so the depth of the silhouette edges should be interpolated, and the difference should be between this and the occludee depth.

[This message has been edited by tarantula (edited 09-06-2002).][/b]

Shadows are not blurred only at the edges; shadows are blurred everywhere, even inside the shadow… Think about it: when you cut a mesh in the middle, the shadow in the middle gets blurred as well… And yes, it is a function of the distance to the light and the distance from the light to the nearest occluder shading that pixel; that helps determine the blurriness factor.

I know of several inexact parts of this solution, for sure, but it is the way I will implement it in my raytracer, as I only need to trace one ray, and I get all those values anyway, so what? Then I do an image-space blur depending on those two values per pixel, as well as the screen-space normal and the screen-space depths… hope I’ll get that working nicely…
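
The image-space blur I mean would be something like this sketch (per-pixel radius already derived from those two values; the normal/depth edge rejection is left out to keep it short):

[code]
// Variable-radius box blur over the shadow buffer. A real version would also
// reject samples across large screen-space normal/depth discontinuities so
// the blur doesn't bleed across object boundaries.
#include <algorithm>
#include <vector>

void variableBlur(const std::vector<float>& shadowIn, const std::vector<int>& radius,
                  std::vector<float>& shadowOut, int width, int height)
{
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int r = radius[y * width + x];  // per-pixel blur radius
            float sum = 0.0f;
            int n = 0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    int sx = std::clamp(x + dx, 0, width - 1);
                    int sy = std::clamp(y + dy, 0, height - 1);
                    sum += shadowIn[sy * width + sx];
                    ++n;
                }
            shadowOut[y * width + x] = sum / static_cast<float>(n);
        }
}
[/code]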

daveperman,

Yeah, I was thinking that my solution would fit right into a solution that already used shadow maps. It just requires getting the difference and clamping it using a fragment program, and then storing that difference in alpha while you store the shadow in the color.

The big problem is that it eliminates one of the major advantages of shadow maps: being able to do more than one light in one pass.

I was wondering about occluder distance versus silhouette distance. The reason why occluder distance would work is that if you blur the middle of a shadow it will still likely be completely shadowed, because all of the pixel samples are shadowed.

I do not think that using the occluder distance or the silhouette distance is absolutely correct, but occluder distance is a lot easier.

Originally posted by pocketmoon:
[b]OK, so with shadow maps you have a texture containing a depth map from the light’s POV.

When rendering you’re comparing the depth (in light space?) of your pixel with the depth in your shadowmap.

To get soft shadows you can sample your shadow map more than once and compare the number of shadowed to non-shadowed results, e.g. with 4 samples: all shadowed = 100%, 3 shadowed = 75%, then 50%, 25%, 0%.

Could we do the first sample at the pixel location and then use the actual difference between the two depths to determine how far away we jitter our other samples? Samples closer to the shadow depth should have further samples jittered more.

hmmmm… horribly incorrect.

Oh well, here’s some nice soft shadow papers:
http://www.mpi-sb.mpg.de/~brabec/doc/

[/b]

What about this:

Get your depth map from light POV, as usual.

Then generate another texture from the depth map using multi-sample edge detection (the depth texture is bound to multiple input textures and a filter kernel applied…). This gives you a fake penumbra map.

Now do the standard ‘am I in shadow’ SGI extension thingy, but also use a penumbra map lookup to soften the shadow. Big edge values (large changes in depth) should equate to softer shadows.
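
A rough sketch of the penumbra-map pass (central differences standing in for whatever filter kernel you’d actually bind; the scale factor is a placeholder):

[code]
// Edge-detect the light-space depth map and keep the magnitude of the depth
// discontinuity as a per-texel softness value: big edges -> softer shadow.
#include <algorithm>
#include <cmath>
#include <vector>

void buildPenumbraMap(const std::vector<float>& depth, std::vector<float>& penumbra,
                      int width, int height, float scale)
{
    for (int y = 1; y < height - 1; ++y)
        for (int x = 1; x < width - 1; ++x) {
            float dx = depth[y * width + (x + 1)] - depth[y * width + (x - 1)];
            float dy = depth[(y + 1) * width + x] - depth[(y - 1) * width + x];
            float edge = std::sqrt(dx * dx + dy * dy) * scale;
            penumbra[y * width + x] = std::clamp(edge, 0.0f, 1.0f);
        }
}
[/code]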

Who was recently talking about generating shadow volume geometry by reading back a depth map and doing edge detection on the CPU?

R.

[This message has been edited by pocketmoon (edited 09-06-2002).]

Originally posted by Nakoruru:
[b]daveperman,

Yeah, I was thinking that my solution would fit right into a solution that already used shadow maps. It just requires getting the difference and clamping it using a fragment program, and then storing that difference in alpha while you store the shadow in the color.

The big problem is that it eliminates one of the major advantages of shadow maps: being able to do more than one light in one pass.

I was wondering about occluder distance versus silhouette distance. The reason why occluder distance would work is that if you blur the middle of a shadow it will still likely be completely shadowed, because all of the pixel samples are shadowed.

I do not think that using the occluder distance or the silhouette distance is absolutely correct, but occluder distance is a lot easier.[/b]

Why does it hurt rendering more than one light in one pass? How does this work at all, btw? Dunno (at least, on a GF2 MX I don’t think it changes much).

It surely is a little more complex than doing hard shadows, but it adds much to the overall result, so what? And I think it’s faster than multisampling the shadow map as well…

Yeah, it’s not 100% correct, but rather a statistical approach which has only one thing to be: fast and simple. I think it looks great for its simplicity… and it will be perfect for raytracing… On shadow maps, doing multiple samples and multiple tests (which is not perfectly accurate either, but you will not see the difference 99.9999% of the time) is quite fast…

If you use blurring then using the occluder distance should be fine. But for a different approach it might not work at all.

daveperman,

The way I was thinking about the problem required multiple passes, but now I see the better solution.

You could do multiple shadow map comparisons from different places on the shadow map. The total area of these samples would be determined by the distance from the light (in this case, distance from the light is correct).

You would either add these up or average them, depending on how you wanted the math to work.

This would be more efficient than rendering the shadow map multiple times from different locations, but of course, it is not as accurate.

If you think of rendering the stencil shadow or shadow map multiple times as blurring the light source, this method would be like blurring the shadow casters.

In other words, keeping the light in the same place while moving the world. The problem of course is that these should be completely equivalent, but because it does not actually render the world again, it loses any parallax effects. However, it should work well because the number of samples and the area of the samples is varied per pixel. I think it would look good unless the light is very big or very close to the shadow.
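
In code-sketch form, the per-pixel part would look something like this (the depth lookup is supplied by the caller, and the 2x2 offset pattern and spread factor are just for illustration):

[code]
// Sample the shadow map over a region whose size grows with the receiver's
// distance from the light, then average the comparison results; this is the
// "blur the shadow casters" version of jittering the light.
#include <cstddef>
#include <functional>

float areaShadow(const std::function<float(float, float)>& fetchDepth,  // caller-supplied shadow-map lookup
                 float u, float v, float receiverDepth,
                 float distFromLight, float spreadPerUnit, float bias)
{
    static const float offs[4][2] = { {-1.f, -1.f}, {1.f, -1.f}, {-1.f, 1.f}, {1.f, 1.f} };
    float radius = distFromLight * spreadPerUnit;  // wider spread farther from the light
    std::size_t shadowed = 0;
    for (std::size_t i = 0; i < 4; ++i) {
        float d = fetchDepth(u + offs[i][0] * radius, v + offs[i][1] * radius);
        if (d + bias < receiverDepth)
            ++shadowed;
    }
    return static_cast<float>(shadowed) / 4.0f;    // averaged shadow amount
}
[/code]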

I probably need to break out paintbrush and draw this idea out, because I probably did a horrible job of explaining it.

Hi all,

I have uploaded a few new images of my current build. Please forgive my poor modelling skills

A stupid room that i have modelled

A close-up view to one of the shadows

The scene is a single whole model. Notice the self-shadowing on the tunnels.

Still trying to figure out how to fix the visual artifacts. I also have a strange shadow color popup (sometimes it becomes less dark… hmm). A busy weekend ahead.

Y.

[This message has been edited by Ysaneya (edited 09-06-2002).]

Originally posted by Nakoruru:
I probably need to break out paintbrush and draw this idea out, because I probably did a horrible job of explaining it.

Uhm, yeah… I think that would help a lot. Did you do a bad job this time, or is it simply the same problem of not using the same vocabulary for the same things? Cheers…

To Ysaneya: those images look cool.

What hardware are you using?

I hope you will take this as constructive criticism, because these shadows look pretty good (keep in mind that some people on this board seemed to complain very hard about the shadows in Doom 3).

Some of the barrels are not casting shadows, and the shadow of the one on the left does not seem much sharper near the base than it does on the wall.

Looking at a trash can in my office, which has very soft shadows due to big fluorescent lights… The base of the shadow of the trash can is extremely sharp, but then it gets fuzzy very fast.

If you zoom in, can you see the shadows get sharper towards the base of the barrels?

Everyone here, including me, seems anxious for you to get an example going this weekend that we can see for ourselves. Good luck!

Ysaneya,

Your FPS looks quite interesting. Did you fake it or what?

What is the scene complexity, # of passes?

great stuff!
V-man