A while ago I saw a short video about Unreal Engine 3 where they talk a bit about the technologies they use. One of them is soft shadows using shadow cube maps. In the video they say they use two shadow cube maps, one unchanged and one blurred, and interpolate between them based on the distance from the pixel to the shadow caster.
Does anyone know how exactly this algorithm works?
Thanks a lot
As far as I understood it, they are just faking the “soft shadows” of the light source (the lantern with stained glass) itself. That’s totally static, and is independent of the dynamic shadowing that is used for the geometry.
I wrote a little app to demonstrate that and to try it out myself, in collaboration with an artist who made an animated character that is holding and swinging a lantern and another dude who provided us with proper cubemaps.
http://home.tiscali.de/der_ton/cubemapdemo.rar (3 MB)
screenshot (looks kinda boring without animation)
You can define the near and far radius for the cubemapped lightsource in the scene.txt file. You can also detach the cubemapped lightsource from the model’s bone and move it around freely, to better see the effect of the nonblurred/blurred cubemap interpolation.
The realtime part of the shadowing is done with stencil shadows.
After reading up on the technique I’m fairly certain it’s similar to Humus’ “shadows that don’t suck”. When the shadow map stores a linear distance, the soft edge can easily be calculated in the fragment shader from the occluder distance and the current distance. More importantly, the method is compatible with much older hardware, although the soft edges wouldn’t work there.
I spoke about this with Sir Tim Sweeney at “6800 Leagues Under the Sea”, and they’re doing nothing more than the demo posted above.
I’ll try to make things clear:
Consider a position in space (that’s your point light), and consider two cube maps. The first cube map is a “lightmap” (totally static) that represents all the light leaving the point light. Again, this is static, so it works well for lanterns, for instance (as long as the lantern doesn’t try to illuminate itself with these cube maps). The second cube map is a blurred version of the first one. For each fragment affected by the light, you look up both cube maps along the light-to-fragment direction (projective mapping) to determine how the fragment should be lit, and you lerp between the two results depending on the distance between the fragment and the point in space.
Clear enough?
Epic uses another of these cube lightmaps with their “moving character in front of a psychedelic point light in a dark corridor” demo.
Ok. And we were all fantasizing about something that was not that advanced… say thanks to marketing hype. At least it made us think of new algorithms.
Originally posted by Ysaneya:
Ok. And we were all fantasizing about something that was not that advanced… say thanks to marketing hype. At least it made us think of new algorithms.
What makes me even angrier is that they use mogumbo’s parallax mapping everywhere, but they did not like the name, so they preferred renaming it to “virtual displacement mapping”, without giving any credit to anyone.
id Software had to bow to Bilodeau’s patent; does that mean you have to be rich to be given credit? And I’m not talking about payment, just plain and simple credit.
As the CEO of a company, I wonder what my interest would be in publishing my tips and tricks. I don’t want to earn money with them; I want to be known as the company that thought of these methods, then start being trusted by a bunch of people, and then maybe sign agreements and be paid to work for them.
You know what? I’ll still do it, because if we want things to move forward, we must all make some sacrifices.
If it’s Epic, I think it’s a safe bet that 99% of this new technology is bull****, and won’t make it into the actual game.