Deferred lighting shadows.

Hey guys.
I’m trying to understand which type of shadows is best for a deferred lighting engine.
From what I see Shadow Maps seem to be the way to go but I’m not sure about this.
Could you guys give your opinion?

yea, shadow maps are by far the easiest method, and they work well with other screen-space rendering techniques.

I am currently trying for the first time to get shadows integrated with my deferred engine. The deferred engine is rendering fine, and I have several lights in the scene. One of these is the ‘sun’, and I am satisfied I have produced a depth texture (aka shadow map) by rendering the scene from the sun’s point of view.
My problem is in the next stage- applying the shadow map.
I suspect the best place to do this in the deferred engine is when I process each light and accumulate the output results into a scene light accumulation FP16 buffer. When processing the sun directional light, I need to somehow ‘apply’ the depth comparison. This is the part I need help on.

Does anyone have some simple code or shaders which they could share - to show how this is done?

Many thanks.

how are G-buffer positions different from their forward-rendered counterparts? A deferred renderer’s G-buffer is simply a cache of all the info needed to do lighting (and shadowing) for the current viewpoint.

I’m new to shadows so I don’t fully understand the process.
I think the process for deferred shadow mapping differs from forward shading only in the way the final composition is done - i.e. computing the texture coords for the shadow map lookup and applying it during the lighting phase.

I think I’m 99% there, as I have produced a test to projectively texture the scene using the sun colour texture (debug) from the sun FBO. To do this I read the G-buffer texture(s) and extract the eye-space vertex position, which is then multiplied by the texture matrix supplied as a uniform to the light shader.
The texture matrix is computed as: scale_bias * lightProj * lightCamView * sceneCamView_Inv.
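As a sanity check, that composition order can be sketched in plain C (hypothetical helper names; row-major matrices). With identity light and camera matrices the combined transform reduces to just the scale-bias matrix, which maps NDC [-1,1] to texture space [0,1]:

```c
#include <assert.h>

/* Minimal sketch with hypothetical helpers; 4x4 row-major matrices. */
typedef struct { float m[4][4]; } Mat4;

static Mat4 mat4_identity(void) {
    Mat4 r = {{{0.0f}}};
    for (int i = 0; i < 4; ++i) r.m[i][i] = 1.0f;
    return r;
}

static Mat4 mat4_mul(Mat4 a, Mat4 b) {
    Mat4 r = {{{0.0f}}};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

/* Maps clip space [-1,1] to texture space [0,1] on x, y and z. */
static Mat4 mat4_scale_bias(void) {
    Mat4 r = mat4_identity();
    for (int i = 0; i < 3; ++i) { r.m[i][i] = 0.5f; r.m[i][3] = 0.5f; }
    return r;
}

/* scale_bias * lightProj * lightView * inverse(sceneView) */
static Mat4 shadow_tex_matrix(Mat4 lightProj, Mat4 lightView, Mat4 sceneViewInv) {
    return mat4_mul(mat4_mul(mat4_scale_bias(), lightProj),
                    mat4_mul(lightView, sceneViewInv));
}

static void mat4_xform(Mat4 m, const float p[4], float q[4]) {
    for (int i = 0; i < 4; ++i) {
        q[i] = 0.0f;
        for (int j = 0; j < 4; ++j) q[i] += m.m[i][j] * p[j];
    }
}
```

With identity stand-ins for the light/camera matrices, the NDC corner (-1,-1,-1) should land at texture coordinate (0,0,0), and (1,1,1) at (1,1,1).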

In the shader, I lookup the shadowmap texture using:

vec4 shadowcolor = shadow2DProj(shadowmap,shadowCoord);
vec3 shade = diffuselight + reverselight; //colour if not in shadow
if ((shadowcolor.z + 0.005) > shadowCoord.z)
shade = vec3 (0.0);

However, the output after using the shadow map is all black.

Any ideas where I’m going wrong?

What you need to apply shadows from a shadow map are the following:

  1. a depth texture from the light’s perspective

  2. a matrix that converts from world space (or eye space or post-projection/NDC space) to shadow texture space (so, the transform you used when rendering objects into the depth texture)

  3. a world space (or eye space or post-projection/NDC space) position for the pixel currently being shaded

If you look at it like that, it doesn’t matter whether you are using deferred or forward rendering, you just need to have those three things. The difference between forward and deferred rendering is that you need to calculate the world space (or eye space or post-projection/NDC space) position from the scene depth buffer, or from your position texture, whichever you use.

So, for example…

varying vec2 screenTexcoord; // <0,0> in bottom left corner of screen, <1,1> in top right corner

uniform sampler2D sceneDepth; // depth of scene, in the range [0,1]
uniform sampler2D shadowMap; // depth from light

// post-projection to shadow map coordinate transform
// could calculate this as:
// eyeSpaceToShadowSpace * inverseProjection
// or
// worldSpaceToShadowSpace * inverseMVP
// “shadow space” is the transform (projection * modelview) you used while filling your shadow map
uniform mat4 NDCToShadow;

void main()
{
    float depthNDC = texture2D(sceneDepth, screenTexcoord).x * 2.0 - 1.0;
    vec4 posNDC = vec4(screenTexcoord * 2.0 - 1.0, depthNDC, 1.0);

    vec4 posShadow = NDCToShadow * posNDC;
    posShadow /= posShadow.w;

    float shadowDepth = texture2D(shadowMap, posShadow.xy * 0.5 + 0.5).x * 2.0 - 1.0;

    if (shadowDepth < posShadow.z)
    {
        // fragment is shadowed
    }
}

Thanks for the posting.

I have shadow mapping ‘working’ now (with aliasing due to z-fighting!).

I have a few questions: I see that you use texture2D to sample the shadow map. GLSL has the shadow2D* functions to do that - so why use texture2D? What would be the difference?
As I understand it, shadow2D* performs a depth comparison between the 3rd texture coordinate and the value in the depth map and returns either 1.0 or 0.0. For this to work, the texture parameter GL_TEXTURE_COMPARE_MODE must be set to GL_COMPARE_R_TO_TEXTURE (or GL_NONE to disable the comparison), and GL_TEXTURE_COMPARE_FUNC to GL_LESS, GL_LEQUAL, GL_GREATER, GL_ALWAYS, etc.

Also, do the shadow2D* functions enable the hardware support for PCF on NVIDIA (when the texture uses GL_LINEAR filters), whereas texture2D will not?

I see from your code you like NDC space, and all the way through you carefully convert all entities into NDC (for example, even the shadow depth comparison is in NDC: shadowDepth < posShadow.z). Any reason for that, or just convenient for your app?

You’re right: texture2D will simply return the depth value, while shadow2D will also perform a depth comparison. shadow2D is the right way to go if you want to use hardware PCF. There are some corner cases where only texture2D can be used - for example, if your shadow map is not actually using the GL_DEPTH_COMPONENT format - but hardware PCF won’t work in those cases either.
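For reference, the texture state involved might look like this (a sketch, assuming a GL_DEPTH_COMPONENT shadow map texture already exists; the constants come from GL 1.4 / ARB_shadow, so older headers may need glext.h):

```c
#include <GL/gl.h>

/* Enable the built-in depth comparison used by shadow2D*().
 * With GL_NONE instead, the depth texture samples like a regular
 * texture and texture2D returns the raw depth value. */
void setup_shadow_compare(GLuint shadowTex)
{
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
    /* GL_LINEAR filtering on a depth-compare texture is what triggers
     * hardware PCF on NVIDIA hardware. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
```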

I used NDC space just out of convenience, it doesn’t really matter what you use as long as you’re able to compare the fragment’s distance to the light vs the light’s distance to nearest occluder using the same coordinate system. Typically that means you need to apply a matrix transform to your input position to get it into the same space that your shadow map is using.

Before programmable shading (and still common when using PCF shadows) I believe you’d normally set up TexGen to multiply the input vertex position by your shadow matrix and a scale & bias to get it into [0,1] texture coordinates, and the result would be fed directly into the shadow2DProj or equivalent. For deferred rendering you have to implement the G-Buffer->position->TexGen->shadow coord portion yourself.
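That fixed-function TexGen setup could be sketched like this (an assumption-laden sketch: `shadowMatrix` is scale_bias * lightProj * lightView in row-major order, and the planes are specified while the modelview holds the camera view, since GL transforms eye planes by the inverse of the current modelview at specification time):

```c
#include <GL/gl.h>

/* Feed each row of the shadow matrix to eye-linear TexGen, so the
 * generated S/T/R/Q become the shadow map coordinates per vertex. */
void setup_shadow_texgen(const float shadowMatrix[16]) /* row-major */
{
    const GLenum coord[4]  = { GL_S, GL_T, GL_R, GL_Q };
    const GLenum enable[4] = { GL_TEXTURE_GEN_S, GL_TEXTURE_GEN_T,
                               GL_TEXTURE_GEN_R, GL_TEXTURE_GEN_Q };
    for (int i = 0; i < 4; ++i) {
        glTexGeni(coord[i], GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
        glTexGenfv(coord[i], GL_EYE_PLANE, &shadowMatrix[i * 4]);
        glEnable(enable[i]);
    }
}
```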

There are many ways to do shadow mapping - glad you got it working. You can try fixing the aliasing issue by adding a small bias to the z coordinate you provide to shadow2D, or by using glPolygonOffset while creating your shadow map.
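The polygon-offset variant would look something like this around the shadow-map pass (the factor/units values here are assumptions to tune per scene):

```c
#include <GL/gl.h>

/* Push occluder depth slightly away from the light while filling the
 * shadow map, so the later depth comparison doesn't self-shadow
 * ("shadow acne"). The values are scene-dependent starting points. */
void begin_shadow_pass(void)
{
    glEnable(GL_POLYGON_OFFSET_FILL);
    glPolygonOffset(1.1f, 4.0f);
}

void end_shadow_pass(void)
{
    glDisable(GL_POLYGON_OFFSET_FILL);
}
```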

I’m trying to understand how this works, but I have some problems.
I have my camera as a frustum, and I don’t know the point it’s looking at.
How do I calculate the lightViewMatrix and lightProjectionMatrix knowing the light position, the camera position, and the frustum of the camera (I also have its rotation and FOV)?
I want to apply this to point lights. I read somewhere that cube textures are used for point lights, but I have no idea how to use those. I guess I can start without them and add cube textures later on, right?

Constructing a matrix for a light is no different than doing so for a camera.

Can’t speak to the cube-light question, except to say that there are alternatives to cubemaps.

Yes, I know that it’s the same thing. But how do I calculate it so that it looks at the same point as the camera?

Thanks AlexN - you have confirmed what I thought I knew. There is so much stuff on shadowing it’s confusing.

I have one more question. My shadows are being projected by the ‘sun’ onto the terrain. Objects (e.g. a tank) cast shadows - but at extreme sun angles (e.g. sunrise/sunset) the shadow cast is so long it stretches right into the far view plane. How do I make the shadows more realistic by stopping them from stretching too far into the distance?

Ehm… please? Could anyone help me with this? Tomorrow is the last day I can work on this. After that I have to present my project at a contest.

what contest? what’s the first prize? You might induce someone to partner with you, if you agree to share the spoils.

It’s only a national contest (and my country, Romania, is not that big).
Anyway for omnidirectional lights I’ve read you must use cube mapping.

Sorry, I don’t have a good solution for this. Extreme angles are always an issue for shadow map quality.

You could limit the maximum angle of the shadows. Or, you could implement some form of soft shadows, such as fading out the shadow effect based on distance from occluder (the difference between fragment and shadow map depth), or variance shadow maps. There will likely be artifacts whichever route you take, but some solutions may be more acceptable than others :)
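A sketch of the distance-based fade idea (the function name and fade distance are assumptions): the lighting factor comes from the gap between the fragment’s light-space depth and the occluder’s depth read from the shadow map, so long stretched shadows far from their caster gradually disappear:

```c
/* Returns a lighting factor in [0,1]: 0 = fully shadowed, 1 = fully lit.
 * The shadow fades out as the receiver gets farther behind its occluder,
 * which also hides very long stretched shadows at grazing sun angles. */
float shadow_fade(float fragDepth, float occluderDepth, float fadeDist)
{
    float gap = fragDepth - occluderDepth;  /* both in light space, same units */
    if (gap <= 0.0f) return 1.0f;           /* fragment is the nearest surface */
    float lit = gap / fadeDist;             /* grows with distance to occluder */
    return lit > 1.0f ? 1.0f : lit;
}
```

Multiply this factor into the light contribution instead of using a hard in/out-of-shadow test.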

You can use a cube map, or dual paraboloid maps, or even some sort of 360 degree fisheye mapping. Cube maps are the easiest to understand, I think. You create a cube shadow map the same way you’d create a 2D shadow map for a projected point light, except that you actually are creating six 2D shadow maps, each one with a field of view of 90 degrees and rotated to face down the axis of a cube map face.
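The six faces can be rendered with a 90-degree FOV, 1:1 aspect projection and the conventional per-face direction/up pairs used with OpenGL cube map targets (sketched below as data):

```c
/* View direction and up vector for each cube map face, in the standard
 * +X, -X, +Y, -Y, +Z, -Z order of the OpenGL cube map face targets. */
typedef struct { float dir[3]; float up[3]; } CubeFace;

static const CubeFace cube_faces[6] = {
    { { 1.0f,  0.0f,  0.0f}, {0.0f, -1.0f,  0.0f} },  /* +X */
    { {-1.0f,  0.0f,  0.0f}, {0.0f, -1.0f,  0.0f} },  /* -X */
    { { 0.0f,  1.0f,  0.0f}, {0.0f,  0.0f,  1.0f} },  /* +Y */
    { { 0.0f, -1.0f,  0.0f}, {0.0f,  0.0f, -1.0f} },  /* -Y */
    { { 0.0f,  0.0f,  1.0f}, {0.0f, -1.0f,  0.0f} },  /* +Z */
    { { 0.0f,  0.0f, -1.0f}, {0.0f, -1.0f,  0.0f} },  /* -Z */
};
```

For each face, render a depth pass with lookAt(lightPos, lightPos + dir, up) and a 90-degree perspective projection; in the lighting pass, sample the cube map with the fragment-to-light direction vector.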

Alternatively you can use stencil shadow volumes.

Edit: Cube shadow maps might not work with the fixed function pipeline, by the way. I’ve always used shaders to do a manual depth comparison.