Shadow map implementation for directional lights


I am in the process of implementing shadow maps but I have some questions for those more experienced in this.

When I create the light's view matrix I am using the equivalent of gluLookAt (because this is what the general populace seems to do). gluLookAt takes an eye position, but what is the eye position of a directional light? If I had a spot or point light this would be obvious, but it is not so for directional lights. The eye position of the light changes the values in the depth map I create.

The projection matrix for a directional light is orthographic. I am experienced in creating orthographic matrices for UI drawing, but not for lights. Should the near/far planes be the same as in my normal perspective matrix? What are good parameters for left/right/top/bottom? People post examples, but they never really explain why they chose the values they did.

Decompose the light's view-projection matrix into a rotation part and a translation+scale part.

The rotation matrix can be found from the light direction and up vectors as:

vec3 Z = normalize(light);        // forward, along the light direction
vec3 X = normalize(cross(Z, up)); // right
vec3 Y = cross(X, Z);             // recomputed up, orthogonal to X and Z
// mat3(X, Y, Z) puts the basis vectors in the columns (light -> world);
// transpose to get the world -> light rotation used below:
mat3 R = transpose(mat3(X, Y, Z));

The translation and scale are whatever maps the rotated bounding region of the visible geometry onto the unit cube. For small scenes, you can just fit the entire model (or, if the light direction can change, the model's bounding sphere), which avoids needing to recompute the projection each frame.
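Because a sphere is rotation-invariant, fitting the bounding sphere reduces to symmetric min/max extents around its light-space center. A minimal C++ sketch (no GLM, and the type/function names are illustrative, not from any particular library):

```cpp
struct Vec3 { float x, y, z; };

// left/right, bottom/top, near/far values suitable for an orthographic
// projection such as glm::ortho.
struct OrthoExtents { float left, right, bottom, top, zNear, zFar; };

// Given the model's bounding sphere (center already transformed into
// light space, radius r), compute orthographic extents that enclose it.
// These extents stay valid however the light direction changes.
OrthoExtents fitSphere(Vec3 centerLS, float radius) {
    return {
        centerLS.x - radius, centerLS.x + radius,  // left, right
        centerLS.y - radius, centerLS.y + radius,  // bottom, top
        centerLS.z - radius, centerLS.z + radius   // near, far
    };
}
```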

So find a convex region which encloses the intersection of the view frustum and the geometry to be shadowed. A bounding box or sphere will work, but will be over-generous near the viewpoint. The entire view frustum will work, but you’ll have to avoid making the far plane too far (for normal rendering, making the far distance ten times larger than necessary doesn’t really matter; for shadow rendering, it will waste most of the shadow map on empty space).
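To make the far-plane point concrete: the frustum corners you would feed into the fitting step can be derived straight from the camera parameters, and clamping the far distance directly shrinks the region the shadow map must cover. A sketch assuming a symmetric perspective camera looking down -Z (the names are illustrative):

```cpp
#include <array>
#include <cmath>

struct Vec3 { float x, y, z; };

// Eight corners of a perspective view frustum in camera space, given the
// vertical field of view (radians), aspect ratio, and near/far distances.
// Pass a clamped farDist here rather than the rendering far plane, so the
// shadow map is not stretched over empty space.
std::array<Vec3, 8> frustumCorners(float fovY, float aspect,
                                   float nearDist, float farDist) {
    std::array<Vec3, 8> c;
    int i = 0;
    for (float d : {nearDist, farDist}) {
        float h = d * std::tan(fovY * 0.5f);   // half height at distance d
        float w = h * aspect;                  // half width at distance d
        for (float sy : {-1.0f, 1.0f})
            for (float sx : {-1.0f, 1.0f})
                c[i++] = {sx * w, sy * h, -d}; // camera looks down -Z
    }
    return c;
}
```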

Then transform the bounding region by the light’s rotation matrix. Find the minimum and maximum values for each coordinate, and pass these to e.g. glm::ortho() to get the translation+scale matrix S. The resulting projection matrix is S*R.
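The transform-and-min/max step might look like this in C++ (a self-contained sketch without GLM; X, Y, Z are the light basis vectors from earlier, and the resulting extents are exactly the arguments you would hand to glm::ortho):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

struct Vec3 { float x, y, z; };
struct Bounds { Vec3 mn, mx; };  // per-axis min and max in light space

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Project each world-space point of the bounding region onto the light's
// basis (equivalent to multiplying by the world -> light rotation) and
// track per-axis extents.
Bounds lightSpaceBounds(const std::vector<Vec3>& pts,
                        Vec3 X, Vec3 Y, Vec3 Z) {
    const float inf = std::numeric_limits<float>::infinity();
    Bounds b{{inf, inf, inf}, {-inf, -inf, -inf}};
    for (const Vec3& p : pts) {
        Vec3 q{dot(p, X), dot(p, Y), dot(p, Z)};  // world -> light space
        b.mn = {std::min(b.mn.x, q.x), std::min(b.mn.y, q.y),
                std::min(b.mn.z, q.z)};
        b.mx = {std::max(b.mx.x, q.x), std::max(b.mx.y, q.y),
                std::max(b.mx.z, q.z)};
    }
    return b;
}
```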

An optional intermediate step is to find a 2-D rotation which aligns the bounding box for an optimal fit, rather than relying upon the up vector. For each edge, find the “width” of the bounding region perpendicular to that edge (i.e. the distance of the point farthest from the edge in the direction of the edge’s normal). Choose the rotation so that the edge yielding the narrowest width becomes one of the axes. This avoids the situation where the rendered geometry runs along the shadow map’s diagonal with the off-diagonal areas wasted.
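One way to sketch that alignment search in C++: for each edge of the convex silhouette, rotate the polygon so the edge lies along the X axis, measure the axis-aligned box, and keep the rotation that gives the smallest box. This is a naive O(n²) version of the idea (rotating calipers does it in O(n)); all names are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

struct Vec2 { float x, y; };
struct Fit { float angle, area; };  // rotation angle (radians) and box area

// hull: vertices of a convex polygon (the light-space silhouette of the
// bounding region), in order around the boundary.
Fit bestAlignment(const std::vector<Vec2>& hull) {
    const float inf = std::numeric_limits<float>::infinity();
    Fit best{0.0f, inf};
    const size_t n = hull.size();
    for (size_t i = 0; i < n; ++i) {
        const Vec2 a = hull[i], b = hull[(i + 1) % n];
        const float ang = std::atan2(b.y - a.y, b.x - a.x);
        const float c = std::cos(-ang), s = std::sin(-ang);
        float mnx = inf, mny = inf, mxx = -inf, mxy = -inf;
        for (const Vec2& p : hull) {  // extents after aligning this edge
            const float rx = c * p.x - s * p.y;
            const float ry = s * p.x + c * p.y;
            mnx = std::min(mnx, rx); mxx = std::max(mxx, rx);
            mny = std::min(mny, ry); mxy = std::max(mxy, ry);
        }
        const float area = (mxx - mnx) * (mxy - mny);
        if (area < best.area) best = {ang, area};
    }
    return best;
}
```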


Thanks for the detailed reply, I will take a look at the approach you have given. What you say makes a lot of sense.