Unreal Engine 3 Shadow algorithm

Hello!

A while ago I saw a short video about Unreal Engine 3 where they talk a little bit about the technologies they use. One of them is soft shadows using shadow cube maps. In this video they say that they use two shadow cube maps, one unchanged and the other blurred, and they interpolate between them based on the distance from the pixel to the shadow caster.
Does anyone know how exactly this algorithm works?

Thanks a lot

Some points to consider:

  • Depth cube maps are not supported on current-gen video cards, but I think there were some rumors (or was it more than rumors?) that the 6800 supports them. Maybe Unreal3 uses them (if so, does anybody know whether hardware PCF is done on the cube map like NVidia does for 2D textures?), maybe not. For backwards compatibility it’s also possible that they encode the depth in a pixel shader, either directly in floats or packed as RGBA.

  • Why do they need TWO different cube maps? I could imagine this: a sharpness factor, depending on the distance to the occluder (but are you sure it’s not the distance to the light? how do you compute the distance to the occluder per pixel?). The SAME depth cube map bound to two texture units. The pixel shader just samples them, but disables PCF on the first one and enables PCF on the second one (whether it’s done in hardware or manually in the pixel shader is another matter). The sharpness factor is used as a blending coefficient between these two samples, but they are basically the same cube map bound to two TMUs.

Y.
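To make that second idea concrete, here is a minimal GLSL sketch of the blend, assuming the occluder-to-light distance is stored in a single cube map and the PCF is done manually in the shader (all names and constants here are my own invention, not anything confirmed about UE3):

[code]
// Hypothetical sketch: ONE distance cube map, sampled hard (1 tap)
// and soft (manual 4-tap PCF), with the two results blended.
uniform samplerCube uDistCube;   // stores occluder-to-light distance
uniform float uSharpness;        // 1 = fully hard, 0 = fully soft
varying vec3 vFragToLight;       // fragment-to-light vector

float shadowTap(vec3 dir, float fragDist)
{
    float occDist = textureCube(uDistCube, dir).r;  // stored distance
    return fragDist <= occDist + 0.01 ? 1.0 : 0.0;  // 0.01 = ad-hoc bias
}

void main()
{
    vec3  dir      = -normalize(vFragToLight);      // light-to-fragment
    float fragDist = length(vFragToLight);          // same units as stored

    // "PCF disabled": a single hard tap.
    float hard = shadowTap(dir, fragDist);

    // "PCF enabled": average four jittered taps around the direction.
    float r = 0.01;                                 // tap radius, tune it
    float soft = 0.25 * (shadowTap(dir + vec3( r,  r, 0.0), fragDist)
                       + shadowTap(dir + vec3(-r,  r, 0.0), fragDist)
                       + shadowTap(dir + vec3( r, -r, 0.0), fragDist)
                       + shadowTap(dir + vec3(-r, -r, 0.0), fragDist));

    // The sharpness factor blends the two results.
    gl_FragColor = vec4(mix(soft, hard, uSharpness));
}
[/code]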

As far as I understood it, they are just faking the “soft shadows” of the light source (the lantern with stained glass) itself. That’s totally static, and it’s independent of the dynamic shadowing that is used for geometry.

I wrote a little app to demonstrate that and to try it out myself, in collaboration with an artist who made an animated character that is holding and swinging a lantern and another dude who provided us with proper cubemaps.
http://home.tiscali.de/der_ton/cubemapdemo.rar (3 MB)

screenshot (looks kinda boring without animation)

You can define the near and far radius for the cubemapped lightsource in the scene.txt file. You can also detach the cubemapped lightsource from the model’s bone and move it around freely, to better see the effect of the nonblurred/blurred cubemap interpolation.
The realtime part of the shadowing is done with stencil shadows.

Originally posted by Ysaneya:
[b]Some points to consider:

  • Depth cube maps are not supported on current-gen video cards, but I think there were some rumors (or was it more than rumors?) that the 6800 supports them. Maybe Unreal3 uses them (if so, does anybody know whether hardware PCF is done on the cube map like NVidia does for 2D textures?), maybe not. For backwards compatibility it’s also possible that they encode the depth in a pixel shader, either directly in floats or packed as RGBA.[/b]
    I know that shadow cube maps aren’t supported right now, but you can still use pixel shaders and floating-point textures for that. I wrote a little app using “texture cube maps” with the OpenGL Shading Language and normal RGB8 textures.
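For reference, packing a distance into an RGB8 target in GLSL could look like the helpers below; this is my own sketch of the idea, not necessarily how the app above does it:

[code]
// Pack a normalized distance (d = distance / light range, in 0..1)
// into an RGB8 render target in the depth pass...
vec3 packDist(float d)
{
    vec3 p = fract(d * vec3(1.0, 256.0, 65536.0));
    p -= p.yzz * vec3(1.0 / 256.0, 1.0 / 256.0, 0.0);
    return p;                    // written out as gl_FragColor.rgb
}

// ...and reassemble it in the lighting pass.
float unpackDist(vec3 p)         // p = textureCube(...).rgb
{
    return dot(p, vec3(1.0, 1.0 / 256.0, 1.0 / 65536.0));
}

// The shadow test in the lighting pass is then simply:
//   float occ = unpackDist(textureCube(uDistCube, lightToFrag).rgb);
//   float lit = (fragDist <= occ + bias) ? 1.0 : 0.0;
[/code]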

Originally posted by Ysaneya:
[b]

  • why do they need TWO different cube maps ? I could imagine this: a sharpness factor, depending on the distance to the occluder (but are you sure it’s not the distance to the light? how do you compute distance to the occluder per pixel?). The SAME depth cube map bound to two texture units. The pixel shader just samples them, but disables PCF on the first one, and enables PCF on the second one (whether it’s done in hardware, or manually in the pixel shader is another thing). The sharpness factor is used as a blending coefficient between these two cube maps, but these are basically the same one bound to two TMUs.

Y.[/b]
I don’t know if they use the distance between the pixel and the light or the distance between the pixel and the occluder, but both are possible. The distance from the pixel to the light can easily be calculated, and the distance from the occluder to the light is stored in the shadow cube map. And I think it’s more realistic to use the distance from the pixel to the occluder.
The only reason why I use two different cube maps is that you can achieve soft shadows with them. I think the idea is to render one depth cube map and then blur it.
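If that’s the idea, the lighting shader might look roughly like this sketch (the names and constants are invented, and whether UE3 really works this way is exactly the open question of this thread): each map gives its own shadow result, and the interpolation factor grows with the distance from the pixel to the caster.

[code]
uniform samplerCube uDistSharp;    // rendered distance cube map
uniform samplerCube uDistBlurred;  // blurred copy of the same map
varying vec3 vFragToLight;

void main()
{
    vec3  dir      = -normalize(vFragToLight);
    float fragDist = length(vFragToLight);          // same units as stored

    float occSharp = textureCube(uDistSharp,   dir).r;
    float occBlur  = textureCube(uDistBlurred, dir).r;

    // A hard test against the sharp map, and a soft ramp against the
    // blurred map (blurring averaged neighbouring occluder distances,
    // so the comparison yields a gradient instead of a step).
    float sharp = fragDist <= occSharp + 0.05 ? 1.0 : 0.0;
    float soft  = clamp((occBlur + 0.05 - fragDist) * 4.0, 0.0, 1.0);

    // Interpolate by distance from the pixel to the shadow caster:
    // the farther behind the occluder, the softer the result.
    float t = clamp((fragDist - occSharp) * 0.25, 0.0, 1.0);
    gl_FragColor = vec4(mix(sharp, soft, t));
}
[/code]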

Originally posted by der_ton:
[b]As far as I understood it, they are just faking the “soft shadows” of the light source (the lantern with stained glass) itself. That’s totally static, and it’s independent of the dynamic shadowing that is used for geometry.

I wrote a little app to demonstrate that and to try it out myself, in collaboration with an artist who made an animated character that is holding and swinging a lantern and another dude who provided us with proper cubemaps.
http://home.tiscali.de/der_ton/cubemapdemo.rar (3 MB)

screenshot (looks kinda boring without animation)

You can define the near and far radius for the cubemapped lightsource in the scene.txt file. You can also detach the cubemapped lightsource from the model’s bone and move it around freely, to better see the effect of the nonblurred/blurred cubemap interpolation.
The realtime part of the shadowing is done with stencil shadows.[/b]
Wow, that’s a really great demo! As far as I can see, I think you do some fake soft shadowing here too. How did you do that?

Wow, that’s a really great demo! As far as I can see, I think you do some fake soft shadowing here too. How did you do that?
In essence this is an old and simple technique: projected textures, just applied to cubemaps.
The problem with those “shadows” is that only the lantern itself casts shadows, not objects lit by the lantern.
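The projection itself is just a cube map lookup along the light-to-fragment vector; a minimal version (with assumed names) looks like this:

[code]
// Project a static light-pattern cube map outward from the light.
uniform samplerCube uLightPattern;  // e.g. the stained-glass pattern
uniform vec3  uLightColor;
uniform float uInvLightRange;       // 1.0 / light range
varying vec3  vFragToLight;         // fragment-to-light vector

void main()
{
    // The cube map is indexed with the light-to-fragment direction.
    vec3  pattern = textureCube(uLightPattern, -vFragToLight).rgb;
    float atten   = max(1.0 - length(vFragToLight) * uInvLightRange, 0.0);
    gl_FragColor  = vec4(uLightColor * pattern * atten, 1.0);
}
[/code]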

Originally posted by Corrail:
The only thing why I use 2 different cube maps is that you can achieve soft shadows with them. I think the idea is to render one depth cube map and then blur this map.

It doesn’t make sense to me. After doing the shadow comparison with your cube map, you get a boolean result. To which buffer do you render this? If you render it to a 2D texture the size of the screen and blur it, you’ll get some artifacts, and from the sound of it that’s not what Unreal3 is doing. Or do you render it to… another cube map, which you then project back onto your geometry? But then you’ll lose some performance blurring stuff you don’t even see.

Y.

Originally posted by Ysaneya:
[b]It doesn’t make sense to me. After doing the shadow comparison with your cube map, you get a boolean result. To which buffer do you render this? If you render it to a 2D texture the size of the screen and blur it, you’ll get some artifacts, and from the sound of it that’s not what Unreal3 is doing. Or do you render it to… another cube map, which you then project back onto your geometry? But then you’ll lose some performance blurring stuff you don’t even see.

Y.[/b]
That is my question here. According to that video from Epic Games, they do something like that. I’ve got just a very rough idea but don’t know the details.

Originally posted by skynet:
[quote] Wow, that’s a really great demo! As far as I can see, I think you do some fake soft shadowing here too. How did you do that?
In essence this is an old and simple technique: projected textures, just applied to cubemaps.
The problem with those “shadows” is that only the lantern itself casts shadows, not objects lit by the lantern.
[/QUOTE]Apart from the interpolated cubemap light pattern that fakes soft shadows of the lantern, there’s no faked soft shadowing going on (and no real soft shadowing either, of course :wink: ).
Skynet is right: it’s nothing new or revolutionary, and it’s technically simple. But in combination with the usual stencil shadows it just looks quite effective. I would assume that that’s what they demonstrated in the UE3 video, decorated with marketing talk about soft shadowing. That’s just an assumption, of course; I hope it’ll turn out I was wrong and they are working on something more sophisticated, like what Ysaneya and Corrail are discussing.

After reading up on the technique, I’m fairly certain it’s similar to Humus’ “shadows that don’t suck” demo. When a linear distance is stored in the shadow map, the soft edge can easily be calculated in the fragment shader from the occluder distance and the current distance. More importantly, the method is compatible with much older hardware, although the soft edges wouldn’t work there.
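One plausible reading of that per-fragment calculation, as a sketch (I haven’t verified how the Humus demo actually computes it, and the names are mine):

[code]
// Soft compare against a linear-distance shadow cube map.
uniform samplerCube uDistCube;
uniform float uHardness;        // larger value = tighter penumbra
varying vec3 vFragToLight;

void main()
{
    float fragDist = length(vFragToLight);
    float occDist  = textureCube(uDistCube, -vFragToLight).r;

    // Fully lit in front of the occluder, fading out over a band of
    // width 1.0 / uHardness behind it. Scaling that band by
    // (fragDist - occDist) would make the penumbra grow with the
    // distance from the caster.
    float lit = clamp((occDist - fragDist) * uHardness + 1.0, 0.0, 1.0);
    gl_FragColor = vec4(lit);
}
[/code]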

I spoke of this with Sir Tim Sweeney during 6800 leagues under the sea, and they’re doing nothing more than the demo posted above.

I’ll try to make things clear:
Consider a position in space (that’s your point light), and consider two cube maps. The first cube map is a “lightmap” (totally static) that represents all the light coming out of the point light. Again, this is static, so it works well with lanterns, for instance (as long as the lantern doesn’t try to illuminate itself with these cube maps). The second cube map is a blurred version of the first one. At each fragment affected by the light, you look up both cube maps to determine how it should be lit (projective mapping), and you lerp between the two results depending on the distance between the current fragment and the point in space.
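In shader terms, that would be something like the sketch below; the near/far radii correspond to what der_ton’s scene.txt exposes, and the uniform names are mine:

[code]
// Two STATIC light-pattern cube maps: one sharp, one pre-blurred.
uniform samplerCube uPatternSharp;
uniform samplerCube uPatternBlurred;
uniform float uNearRadius;      // fully sharp at or inside this distance
uniform float uFarRadius;       // fully blurred at or beyond this one
varying vec3 vFragToLight;

void main()
{
    vec3  dir = -normalize(vFragToLight);   // light-to-fragment direction
    float d   = length(vFragToLight);

    // 0 at the near radius, 1 at the far radius.
    float t = clamp((d - uNearRadius) / (uFarRadius - uNearRadius),
                    0.0, 1.0);

    vec3 sharp = textureCube(uPatternSharp,   dir).rgb;
    vec3 blur  = textureCube(uPatternBlurred, dir).rgb;
    gl_FragColor = vec4(mix(sharp, blur, t), 1.0);
}
[/code]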

Clear enough?

Epic uses another of these cube lightmaps in their “moving character in front of a psychedelic point light in a dark corridor” demo.

SeskaPeel.

Ok. And we were all fantasizing about something that was not that advanced… say thanks to marketing hype. At least it made us think of new algorithms :slight_smile:

Y.

Originally posted by SeskaPeel:
[b]I spoke of this with Sir Tim Sweeney during 6800 leagues under the sea, and they’re doing nothing more than the demo posted above.

I’ll try to make things clear:
Consider a position in space (that’s your point light), and consider two cube maps. The first cube map is a “lightmap” (totally static) that represents all the light coming out of the point light. Again, this is static, so it works well with lanterns, for instance (as long as the lantern doesn’t try to illuminate itself with these cube maps). The second cube map is a blurred version of the first one. At each fragment affected by the light, you look up both cube maps to determine how it should be lit (projective mapping), and you lerp between the two results depending on the distance between the current fragment and the point in space.

Clear enough?

Epic uses another of these cube lightmaps in their “moving character in front of a psychedelic point light in a dark corridor” demo.

SeskaPeel.[/b]
That’s it? Wow, I can only agree with Ysaneya! :wink:
Thanks!

Originally posted by Ysaneya:
[b]Ok. And we were all fantasizing about something that was not that advanced… say thanks to marketing hype. At least it made us think of new algorithms :slight_smile:

Y.[/b]
What makes me even more angry is that they use mogumbo’s parallax mapping everywhere, but they did not like the name, so they preferred to rename it “virtual displacement mapping” without giving any credit to anyone.

ID Software had to bend in front of Bilodeau’s patent; does that mean you have to be rich to be given credit? And I’m not speaking of payment, only plain and simple credit.

As CEO of a company, I wonder what my interest would be in publishing my tips and tricks. I don’t want to earn money with them; I want to be known as the company that thought of these methods, then begin to be trusted by a bunch of people, and then maybe sign agreements and be paid to work for them.

You know what? I’ll still do it, because if we want things to move forward, we must all make some sacrifices.

SeskaPeel.

If it’s Epic, I think it’s a safe bet that 99% of this new technology is bull****, and won’t make it into the actual game.