Dynamic soft shadows

But it is the way I will implement it in my raytracer, as I only need to trace one ray and I get all those values anyway, so what?

If you're going to bother with a ray tracer at all, you should do it right and stochastically trace the area of the light volume (shadows are soft because lights have area; a true point light wouldn't cause soft shadows).

I used an Athlon 1.4 with a Radeon 8500. The FPS could be improved a lot… because, surprisingly, I'm doing everything in immediate mode with software vertex processing.

All the barrels cast shadows, but given the position of the light and the barrels in that shot, the shadows are hidden behind the barrels. However, I do agree that the shadow is not that sharp near the base, and that's because (as I already stated) I used the distance to the light and not to the occluder. By tweaking some of the parameters you can get sharp shadows near the base, but I'm still trying to find settings that are correct everywhere :(

However, even if I don't find a way to fix that, I still believe the shadows are pretty nice. The room looks like it has realtime high-res lightmaps when you move around.

The scene is not very complex (I guess around 2000 tris), and my technique uses very few passes. A grand total of 3 for now: one to fill the Z values, one to draw the shadow volume, then one to display the scene with diffuse textures + shadows.

Y.

Ysaneya,

Yes, I said they do look really good; I was just nit-picking, studying them and trying to figure out what is going on.

Daveperman,

It is a problem of figuring out exactly what words to use and how to explain an idea that really needs figures to describe properly. I’ve come up with a much better description, one that may not even need pictures. I’ll post it after I edit it some more.

You would not like it, however, because it will only work for shadow mapping :)

Originally posted by Korval:
If you're going to bother with a ray tracer at all, you should do it right and stochastically trace the area of the light volume (shadows are soft because lights have area; a true point light wouldn't cause soft shadows).

It's for realtime raytracing, and I prefer to do some nice pixel shading on the GPU afterwards instead of tracing more rays… so it's practically for free…

This is going to be a long post. It's a more coherent (I hope) version of my idea for doing soft shadows on next-gen hardware. Of course people will come up with lots of different ways; this is just my idea. It's kinda abstract and general because I wrote it with the idea that I might want to post it with a demo someday.

The usual shadow mapping technique is to first render the scene from the point of view of the light source, then render the scene from the eye's point of view while projecting the image produced from the light's view onto the eye's view. By projection, I mean that the light-view image is mapped into the scene as if it had been projected from the light source. If you can match a pixel in the eye's view with a pixel from the light's view, that means the light shines on it, because only pixels in the light's point of view actually receive light. If there is a mismatch, then it is in shadow.
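
For illustration, here is a minimal C++ sketch of that comparison, assuming the light-view image is just an array of float depths and (s, t) are the pixel's projected coordinates in it; all the names and the bias value are illustrative, not from this thread.

#include <algorithm>

// Fetch the depth the light saw at projected coordinates (s, t) in [0,1].
float sampleDepth(const float* shadowMap, int w, int h, float s, float t)
{
    int x = std::min(std::max(int(s * w), 0), w - 1);
    int y = std::min(std::max(int(t * h), 0), h - 1);
    return shadowMap[y * w + x];
}

// A pixel matches (is lit) if its light-space depth agrees with the stored
// one up to a small bias; a mismatch means something occludes it.
bool inShadow(const float* shadowMap, int w, int h,
              float s, float t, float pixelDepth, float bias = 0.002f)
{
    return pixelDepth > sampleDepth(shadowMap, w, h, s, t) + bias;
}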

This will produce sharp shadows because the light source is modeled as an infinitely small point. This means that any surface will always block all or none of the light. Now you see it, now you don’t. Soft shadows are produced by real light sources because real light sources can be partially blocked. If from a point on a particular surface, only half a light source is visible, then it only receives half as much light.

One way to model this is to render clusters of point lights. These closely spaced lights will produce multiple shadows which blend together and give a soft effect. The problem is that it multiplies the amount of work by the number of lights in the cluster: 12 lights in a cluster is 12 times the work.

A cheaper way to simulate this effect would be to take an -area- of the light's view around the point that maps to the eye's pixel, and have the shadow result be the percentage of that area which mismatches (i.e. is in shadow) when compared against the eye's pixel. Taking 12 samples within an area should be cheaper than re-rendering the scene 12 times.
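
A hedged sketch of that area sampling, building on the inShadow() sketch above: take numSamples depth comparisons spread over a kernel and return the lit fraction. The offsets table and kernelRadius are assumptions, defined further below.

// Returns 1.0 for fully lit, 0.0 for fully shadowed, in-between = penumbra.
float litFraction(const float* shadowMap, int w, int h,
                  float s, float t, float pixelDepth,
                  const float (*offsets)[2], int numSamples,
                  float kernelRadius)
{
    int lit = 0;
    for (int i = 0; i < numSamples; ++i)
    {
        float ss = s + offsets[i][0] * kernelRadius;
        float tt = t + offsets[i][1] * kernelRadius;
        if (!inShadow(shadowMap, w, h, ss, tt, pixelDepth))
            ++lit;
    }
    return float(lit) / float(numSamples);
}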

The size of the area being sampled (and the number of samples needed to get a good result) depends on how big the light is from the point of view of the surface being rendered. From the surface being rendered, if you look towards the light, how much of it is obscured determines how much shadow you are in at that point. The closer you are to the light, the bigger it will appear (and of course, the bigger the light, the bigger it will appear).
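
In code, that apparent-size argument is just one divide; k is a hand-tuned constant of my own, since the post gives no exact formula.

// The apparent size of a spherical light of radius R seen from distance d
// shrinks roughly as R / d, so the sampling kernel can be scaled the same way.
float kernelRadius(float lightRadius, float distToLight, float k)
{
    return k * lightRadius / distToLight;
}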

The problem with this approach compared to rendering the scene 12 times is that when the scene is rendered 12 times it is from 12 slightly different points of view. Shadows change shape depending on the angle they are cast from: a profile casts a very different shadow than one cast from the front. However, this should be acceptable as long as the light source is not too big or too close to the shadows, because the difference between object silhouettes is not that dramatic.

Also, other approaches which re-render the shadows multiple times do not actually consider multiple viewpoints, but just skew the shadows. Such skewing does not change the shape of the shadow, only where it is projected. Those methods should be equivalent to this one.

What shape is the sample kernel? Its size is determined by the distance from the light to the surface, but what is its shape? The simple answer is circular: that is the projected shape of a spherical area light source, and it is easiest because it does not change depending on which direction the light is cast in. The sample points could simply be evenly distributed over a disc and still give good results. Other shapes would require that the 3D cluster of points defining the shape of the light source be projected onto the light-view image; once those points were known, samples could be taken there. Very unusually shaped lights could be modeled this way.
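
A small sketch of one way to build such an evenly distributed circular kernel; the outward spiral is my own choice, any even coverage of the disc would do.

#include <cmath>

void buildDiskKernel(float (*offsets)[2], int numSamples)
{
    const float PI = 3.14159265f;
    for (int i = 0; i < numSamples; ++i)
    {
        // sqrt keeps samples area-uniform instead of clumped at the center
        float r = std::sqrt((i + 0.5f) / numSamples);
        // golden-ratio angle step spreads successive samples around the disc
        float a = 2.0f * PI * 0.618034f * i;
        offsets[i][0] = r * std::cos(a);
        offsets[i][1] = r * std::sin(a);
    }
}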

That was a pretty abstract explanation. One familiar with shadow mapping might wonder why I did not mention depth values. The fact is that shadow mapping does not rely on depth values, only on finding mismatches between the projected light view and the eye view. Depth values are just the most obvious thing to compare. Other implementations use a single color for each polygon (called index shadow mapping). Mismatches in color result in shadow.

When mapping a spherical area light source, one only needs the distance from the surface to the light to get a good approximation of the light's apparent size. This could be obtained by mapping a 1D texture so that it corresponds to how much the filter kernel needs to be scaled.
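
For illustration, the table behind such a 1D texture could be precomputed like this, reusing the same falloff as kernelRadius() above; maxDist and k are tuning assumptions.

#include <vector>

std::vector<float> buildKernelScaleLUT(int size, float lightRadius,
                                       float maxDist, float k)
{
    std::vector<float> lut(size);
    for (int i = 0; i < size; ++i)
    {
        float d = maxDist * (i + 0.5f) / size;  // distance this texel stands for
        lut[i] = k * lightRadius / d;           // kernel scale at that distance
    }
    return lut;
}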

If one wants to project a cluster of light sources then one needs to be able to determine the 3d coordinates of each light relative to the surface being rendered. One could then project those points onto the shadow map as sample points. It would probably be a fairly involved pixel shader.

I would love to turn all this theory into an implementation. Right now one could implement this on a Radeon 9700 or using nVIDIA’s NV30 emulation. I thought about this theory months ago, but at the time I abandoned it as hopeless until more capable hardware came about.

I would be interested to see you get your shadows working in Tenebrae; that would be very cool. Nutty could then implement high dynamic range color, and then we'd beat Doom 3.

Now is our chance to get ahead of Carmack. The Radeon 9700 is out, but he is too busy trying to ship Doom 3 to do any real research. Of course, he seems to have the ability to implement a whole graphics engine over a weekend :)

Hehe, he'll do the soft shadows over a weekend as well, so what?
Can't wait to get my own R300.
Let's beat Carmack… hehe

Hi

The weekend has been over for a few days now, but no replies… hmmm.

I also tried some soft shadowing, but I'm still having some problems.
I render the depth values of my objects and the shadow volumes into a pbuffer and do my stenciling there. After that I apply the pbuffer as a texture over the original scene, and because the texture gets smoothed (currently via mipmapping) I get a soft shadow appearance.
But I'm having problems getting the shadow info into the alpha channel so I can blend the lit scene with destination alpha.
That should be solved next week (I have to play ultimate the whole weekend).

What I could see so far (my variant renders a black rectangle where the shadow should be) wasn't that promising if you're going for realistic quality, because you don't get the effect of shadows becoming softer as the distance between caster and receiver grows.
But it looks much better than hard shadows, and the performance hit is very low (since you can keep the pbuffer smaller than your rendering buffer, it's only about 10% to 20% slower). You do get artifacts when moving, though, caused by biasing the texture LOD.
I think this could be solved by smoothing with a texture shader or on the CPU, which should give better smoothing and remove the need for texture LOD biasing.
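
For reference, the LOD-bias part Lars describes could look roughly like this with the GL_EXT_texture_lod_bias extension (available on GeForce/Radeon hardware of that era); the function name and filter choice are illustrative, and the extension must be checked for at runtime.

#include <GL/gl.h>
#include <GL/glext.h>

void setShadowTextureBlur(GLuint shadowTex, float lodBias)
{
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    // trilinear filtering blends between the pre-blurred mipmap levels
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                    GL_LINEAR_MIPMAP_LINEAR);
    // a positive bias forces coarser (blurrier) levels to be sampled
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL_EXT,
              GL_TEXTURE_LOD_BIAS_EXT, lodBias);
}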

So what about your ideas?
Some info would be interesting, even if they're not finished. What problems did you solve, and which ones do you still have?
Is Carmack beaten?

Lars

edit: some spelling corrections

[This message has been edited by Lars (edited 09-12-2002).]

Well, no reply because I haven't finished working on it yet. But you'd guessed that, I'm sure :)

So far, I'm still working on my stencil shadows implementation. I'm trying to fix the last bugs, and I've implemented per-pixel lighting with bumpmapping. A friend of mine is modelling a 15k-tri scene which will hopefully look better than most commercial games. He still has the textures to do, but most of the modelling is done, and let me tell you, it already looks awesome.

I'm gonna get back to the soft-shadows implementation once the stencil shadows are perfect, as I don't want to be messing with two different problems at the same time.

Now that the week-end is over, I might as well explain how I implemented my soft shadows. I'm basically rendering the stencil result to a 3D texture, then generating each level by hardware-accelerated blurring of the previous one. Usually, 3 or 4 levels are sufficient. After that, I can specify a per-pixel or per-vertex sharpness coefficient in the [0, 1] range, which becomes the third texture coordinate (r). I project the vertices to the screen to get (s, t). I can then modulate my scene by the 3D texture.
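
For illustration, a CPU-side C++ sketch of the data layout this implies: slice 0 of the 3D texture holds the raw stencil mask, and each further slice holds a more blurred copy, so the r coordinate selects the blur amount. Ysaneya does the blurring on the GPU; the box blur and all names below are my assumptions.

#include <vector>

void buildBlurSlices(const unsigned char* stencil, int w, int h,
                     int levels, std::vector<unsigned char>& tex3d)
{
    tex3d.assign(size_t(w) * h * levels, 0);
    std::vector<int> cur(stencil, stencil + size_t(w) * h);
    for (int l = 0; l < levels; ++l)
    {
        // copy the current blur level into slice l of the 3D texture
        for (int i = 0; i < w * h; ++i)
            tex3d[size_t(l) * w * h + i] = (unsigned char)cur[i];
        if (l + 1 == levels)
            break;
        // 3x3 box blur to produce the next, softer level
        std::vector<int> next(size_t(w) * h);
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x)
            {
                int sum = 0, n = 0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx)
                    {
                        int xx = x + dx, yy = y + dy;
                        if (xx >= 0 && xx < w && yy >= 0 && yy < h)
                        {
                            sum += cur[yy * w + xx];
                            ++n;
                        }
                    }
                next[y * w + x] = sum / n;
            }
        cur.swap(next);
    }
}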

The real trick is to find a good way to specify the sharpness coefficient. So far, I've only had success with the distance from the vertex to the light, corrected by a vertex-to-viewer coefficient so that shadows in the distance don't blur too much. I'm still trying to find a better way to do it, but for that I'd need the per-vertex or per-pixel distance to the occluder, which I don't have. Any ideas are welcome, but I'll be sure to post the demo when it's done, whatever the results are… just be patient :)
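
As a hedged guess at the shape of that term (the post gives no exact formula), something like the following, where kLight and kViewer are made-up tuning constants and larger values select blurrier slices of the 3D texture.

#include <algorithm>

float blurCoord(float distToLight, float distToViewer,
                float kLight, float kViewer)
{
    // blur grows with distance to the light, damped by distance to the
    // viewer so far-away shadows don't smear over too many pixels
    float b = kLight * distToLight / (1.0f + kViewer * distToViewer);
    return std::min(std::max(b, 0.0f), 1.0f);  // clamp to the [0,1] r range
}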

Y.

I may have found a solution to the pixel-to-occluder problem.

I assume we are working with finite-radius point lights. We can define a "position" cube around the light center, with dimensions corresponding to the light radius. Given a vertex, it's easy to transform its world-space position into that light-space [0,1]^3 position.

Before drawing the shadow volumes, render the scene a first time into a texture. Specify the light-space position as the color, so that it gets interpolated across the screen. Now, for each texel of that texture, we've got its light-space position.

When drawing the shadow volume, we can do the same with the shadow quads, except that we specify the light-space position at both the front vertices and the extruded back vertices.

This effectively means that it's possible, using a projection now, to get two values for a single pixel: the light-space position of that pixel, and the light-space position of its occluder. Both are in the [0,1] range, so we can compute a form of distance between them in a pixel shader, which would in turn give a good sharpness coefficient for the last pass…
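
For illustration, a C++ sketch of the encoding and the final distance, assuming positions are packed into RGB exactly as described; the Vec3 type and both function names are mine.

#include <cmath>

struct Vec3 { float x, y, z; };

// Map a world-space position into the [0,1]^3 cube centered on the light,
// sized by the light radius, so it can be written out as an RGB color.
Vec3 toLightSpace(const Vec3& p, const Vec3& lightPos, float lightRadius)
{
    Vec3 r;
    r.x = (p.x - lightPos.x) / (2.0f * lightRadius) + 0.5f;
    r.y = (p.y - lightPos.y) / (2.0f * lightRadius) + 0.5f;
    r.z = (p.z - lightPos.z) / (2.0f * lightRadius) + 0.5f;
    return r;
}

// With the pixel's and the occluder's encoded positions read back from the
// two passes, the pixel-to-occluder distance falls out directly (in cube
// units; multiply by 2 * lightRadius to get world units).
float occluderDistance(const Vec3& pixelLS, const Vec3& occluderLS)
{
    float dx = pixelLS.x - occluderLS.x;
    float dy = pixelLS.y - occluderLS.y;
    float dz = pixelLS.z - occluderLS.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}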

I haven't tested that yet, but it should work, shouldn't it?

Y.

[This message has been edited by Ysaneya (edited 09-13-2002).]

If I got you right, it's actually what I proposed…

Don't know, where did you propose that? I haven't found anything in that thread except the idea about using a depth-map-like method. That is not what I'm proposing, because 1) I'm not sure it would work well with point lights and 2) you're still limited by the resolution and bit precision of the depth map.

What I'm proposing is to calculate, for each pixel of the screen, the position of that pixel in world space, and the position of the nearest occluder in world space too. Then you can find the distance between these two points, which is the distance to the occluder.

Y.

I was proposing that you use the nearest occluder between the point and the light, together with the distance to the light, and use the relation between them to derive a softening value…

I said that with shadow maps this is much more straightforward to implement, but that has nothing to do with the technique or proposal itself.

I also talked about a way to implement that in a raytracer, with the very same method. So: two different implementations, same proposal, same idea. And your idea is the same…

blurfactor = f(point-light,nearest_hit(point-light));
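
One plausible choice of f, for illustration, is the similar-triangles penumbra estimate: the blur grows with the occluder-to-receiver gap relative to the light-to-occluder gap, which is why both distances are needed. lightRadius, scale and the clamping below are my assumptions, not anything stated in the thread.

float blurFactor(float lightRadius, float distToLight,
                 float distToOccluder, float scale)
{
    // both distances are measured from the shaded point, so the
    // light-to-occluder distance is their difference
    float distLightToOccluder = distToLight - distToOccluder;
    if (distLightToOccluder <= 0.0f)
        return 1.0f;  // degenerate: occluder at or behind the light
    // similar triangles: penumbra width at the receiver
    float penumbra = scale * lightRadius * distToOccluder / distLightToOccluder;
    return penumbra > 1.0f ? 1.0f : (penumbra < 0.0f ? 0.0f : penumbra);
}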

Btw, you can also store the surface normals and the depth in screen space, to help blur in different directions with different strengths. Hehe, elliptic blur…

I'm no light expert, but why would you need the distance to the light when you've got the distance to the occluder? My basic idea was to have a term based on the occluder distance alone, which leads to ugly artifacts in the distance (i.e. you're blurring 8x8 or 16x16 pixels at 1 km); so it has to be corrected by a term dependent on the distance to the viewer…

For now I've got the to-light/to-viewer terms working, but it's per-vertex and it's not as good as per-pixel to-occluder/to-viewer terms would be. However, the performance drop is not that bad… I get a 20 to 30% slowdown in my scene so far. And I haven't implemented vertex shaders yet.

Y.