Shadow mapping

Hi all!
I've been reading lots of tutorials about shadow mapping, but almost all of them say the same things and explain the technique identically.

So, the algorithm (as I understand it) consists of two steps:

  1. Render the scene from the point of view of the light source and fill a specially created texture with depth values.
  2. Render the scene using this map.
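If I understand correctly, the shaders for step 1 are almost trivial; something like this sketch (uLightMVP is my name for the light's projection * view * model matrix):

```glsl
#version 330 core
// Step 1 vertex shader: transform into the light's clip space.
// Only depth gets written, so nothing else is needed.
uniform mat4 uLightMVP;
in vec4 aPosition;
void main() { gl_Position = uLightMVP * aPosition; }
```

```glsl
#version 330 core
// Step 1 fragment shader: empty; the attached depth texture is
// filled automatically by the depth test.
void main() { }
```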

And the thing that I can't figure out is the second step: using the shadow map. I can't understand how I can use this map while rendering my scene from the POV of the camera. Like, I have depth values stored in the texture, but how do I use them during the second render?

Can anyone give some links with a bit more detailed explanation or something? Thanks in advance!

The second pass “renders” the scene from both points of view.

The vertex shader needs the matrices for both viewpoints (camera and light), and transforms the vertices into both coordinate systems (gl_Position gets the camera-space coordinates, a user-defined variable gets the light-space coordinates).
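A minimal sketch of such a vertex shader (the uniform and variable names are placeholders I chose, not anything mandated):

```glsl
#version 330 core
// Pass 2 vertex shader: transform each vertex into both spaces.
uniform mat4 uCameraMVP;   // camera projection * view * model
uniform mat4 uLightMVP;    // light projection * view * model
in vec4 aPosition;
out vec4 vLightSpacePos;   // handed to the fragment shader

void main()
{
    gl_Position    = uCameraMVP * aPosition;  // camera clip-space position
    vLightSpacePos = uLightMVP  * aPosition;  // light clip-space position
}
```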

The fragment shader uses the x,y coordinates of the light-space position to obtain a value from the depth texture, then compares that to the z coordinate of the light-space position. If the light-space depth is less than or equal to the value from the texture, the fragment is illuminated, otherwise it’s in shadow.

Things to bear in mind:

  1. The normalised device coordinates obtained from the light-space transformation will be in the range -1…+1, while texture coordinates are 0…1 and depth values are 0…1, so you need an additional x->(x+1)/2 transformation for each coordinate.

  2. You can access the depth texture either as a normal 2D texture using a sampler2D uniform, and perform the comparison yourself, or as a shadow map using a sampler2DShadow uniform, where the comparison is performed automatically.

In the latter case, the texture is accessed using a 3D texture coordinate where the z coordinate is the reference depth. The comparison needs to be enabled with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_REF_TO_TEXTURE); the comparison function can be set with glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, ...). Both options are shown in the sketch below.
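Putting points 1 and 2 together, a fragment shader for the second pass might look like this minimal sketch (again, the names are my own; only the shadow test is shown):

```glsl
#version 330 core
// Option 1: manual comparison using an ordinary sampler2D.
uniform sampler2D uShadowMap;
in vec4 vLightSpacePos;
out vec4 fragColor;

void main()
{
    // Perspective divide gives NDC in -1..+1; remap to 0..1 (point 1).
    vec3 proj = vLightSpacePos.xyz / vLightSpacePos.w;
    proj = proj * 0.5 + 0.5;

    float nearest = texture(uShadowMap, proj.xy).r; // depth nearest the light
    float lit = (proj.z <= nearest) ? 1.0 : 0.0;    // the comparison, by hand

    fragColor = vec4(vec3(lit), 1.0);

    // Option 2: declare "uniform sampler2DShadow uShadowMap;" instead and
    // enable GL_TEXTURE_COMPARE_MODE; then the comparison is done for you,
    // with proj.z acting as the reference depth:
    //   float lit = texture(uShadowMap, proj);
}
```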

Shadows represent the absence of light. For our purpose, a shadow means that some object has interposed itself between the light source and the fragment being rendered. The goal of shadow mapping is to be able to say, for each fragment to be rendered, whether light is being blocked by something (even if that thing is part of the same mesh).

Since being in shadow means that no light from that light source reaches your current fragment, each fragment needs two things:

  1. Its current distance from the light source.

  2. The distance from the light source of the nearest object in that direction.

If #1 is greater than #2, then some object nearer to the light source is casting a shadow over that fragment. So you don’t add the contribution of that light source to the overall lighting effect.

#1 is easy enough to compute. For a directional light, you just compute the distance to the near plane, as viewed from the light source. For a point light, you compute the distance to the light’s position. All of these distances have to be measured in the correct spaces, so you’ll need to transform the current fragment’s position into the relevant space, or just pass the depth or position as a vertex shader output in the correct space. But they’re all doable easily enough.
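As a sketch (all of the names here are assumptions), #1 for each light type could be computed in the fragment shader like this:

```glsl
#version 330 core
uniform vec3 uLightPosWorld;   // point-light position, world space
in vec3 vPosWorld;             // fragment position, world space
in vec4 vPosLightClip;         // fragment position, light clip space
out vec4 fragColor;

void main()
{
    // Directional light: depth along the light direction, i.e. the
    // light-space NDC depth after the perspective divide.
    float distDirectional = vPosLightClip.z / vPosLightClip.w;

    // Point light: plain Euclidean distance to the light position.
    float distPoint = distance(vPosWorld, uLightPosWorld);

    fragColor = vec4(distDirectional, distPoint, 0.0, 1.0); // placeholder output
}
```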

The whole purpose of shadow mapping is to provide a shader with access to value #2, which cannot be computed locally.

OK, so you have this shadow map that you rendered from the point of view of the light source. And you have a fragment in the scene. The question you’re probably asking is, “how do I find the right texture coordinate to fetch the correct distance value from?”

That’s really the easy part: texture projection.

Texture projection is a technique where you generate texture coordinates based on the positions of vertices. You effectively transform those vertices via normal projection, only you transform them into the space of the texture, rather than into the space of the screen (see http://alfonse.bitbucket.org/oldtut/Texturing/Tut17%20Projective%20Texture.html). But it’s more or less the same math.

This is commonly used for flashlights and projected lights, but shadow mapping uses it too. For lights, you are using the texture to determine the color and intensity of a light source by projecting the texture across the scene. For shadows, you are using the texture to determine the depth value nearest the light for that particular position in the rendered scene.
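A minimal sketch of that lookup written as a projective texture access (names are assumptions; the “bias” matrix is the -1…+1 to 0…1 remap mentioned earlier, folded into the matrix so the fragment shader can hand the raw vec4 to textureProj):

```glsl
#version 330 core
uniform sampler2DShadow uShadowMap;
// vShadowCoord = bias * lightProj * lightView * model * position, where
// bias = mat4(0.5,0,0,0,  0,0.5,0,0,  0,0,0.5,0,  0.5,0.5,0.5,1)
// (GLSL mat4 constructors take columns).
in vec4 vShadowCoord;
out vec4 fragColor;

void main()
{
    // textureProj divides by vShadowCoord.w, compares the resulting z
    // against the stored depth, and returns 1.0 when lit, 0.0 in shadow
    // (or a filtered value in between).
    float visibility = textureProj(uShadowMap, vShadowCoord);
    fragColor = vec4(vec3(visibility), 1.0);
}
```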

So, actually, I just render my scene from the point of view of the light source, as if my camera were situated at the position of the light source (same perspective matrix, but the view matrix should be with respect to my light source)? That way the shadows won’t be distorted, right?

The two sets of matrices are independent.

A point light source should use a perspective projection, a directional light source (e.g. sunlight) should use a parallel projection. The bounds of the light’s projection should be chosen so that everything which is in view is within the frustum.

The light can use a single matrix which combines both the projection and view transformations (the main reason for having separate model-view and projection transformations is that some calculations, e.g. lighting, can’t easily be performed in a projective space, but this isn’t an issue for the shadow map).
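For illustration, a parallel (orthographic) light projection is just a scale and translate. Written out as a GLSL mat4 with made-up bounds (in practice you’d build this once on the CPU and upload it):

```glsl
#version 330 core
// An orthographic "light projection" for a directional light; the
// frustum bounds below are arbitrary example values.
const float l = -10.0, r = 10.0, b = -10.0, t = 10.0, n = 0.1, f = 50.0;
const mat4 lightProj = mat4(
    2.0 / (r - l), 0.0,           0.0,            0.0,  // column 0
    0.0,           2.0 / (t - b), 0.0,            0.0,  // column 1
    0.0,           0.0,          -2.0 / (f - n),  0.0,  // column 2
   -(r + l) / (r - l), -(t + b) / (t - b), -(f + n) / (f - n), 1.0);

in vec4 aPosition;
void main() { gl_Position = lightProj * aPosition; }
```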