2D point light shadow mapping

I’m trying to wrap my head around how I would implement 2D shadow mapping for a series of lights.

The tutorial I’m following is this one, although it applies to 3D directional lights, so I’m just looking to it for general guidance:

For simplicity what I have at the moment is a series of vertices inside a buffer object, which I’m drawing as lines with one program.

I’m now looking to generate shadow maps using depth textures, which I can then pass to my final shader program to work out what’s in or out of the shadow and by how much.

I have a few assumptions about what I’d need to do to apply this to 2D, but I’d prefer to just run these by someone:

  • I’ll need to redraw the scene from the perspective of each point light to create one depth texture per light.
  • These depth textures will be something like 1024x1 in resolution, or potentially just a 1D texture if that’s possible.
  • Each draw to these depth textures should use a perspective projection matrix.
  • Once all depth textures are drawn they’ll be passed into another shader program as an array of textures, and these will inform whether a particular fragment is in shadow.
  • This other program will effectively just be drawing a screen-sized quad using an orthographic projection, passing over every fragment, and computing shadow/lighting intensities for an untextured background.
  • I’m assuming that the best approach to using the depth textures is to derive the position in the depth texture from the angle between the current fragment and the light in question, then check whether the corresponding value in the depth texture is nearer to or farther from the light than the current fragment to find whether it’s in shadow.

Do these sound like reasonable assumptions to make?

Thanks
Paul

It’s possible. glFramebufferTexture1D and glFramebufferTexture both allow you to attach a 1D texture to a framebuffer object.
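
In case it’s useful, here’s a minimal sketch of creating a 1024-texel 1D depth texture and attaching it to an FBO (the size, format and filtering here are just one possible choice, not anything you’re required to use):

    #include <GL/glew.h>   /* or whichever extension loader you're using */

    /* Create a 1024-texel 1D depth texture and attach it to a new FBO as the
       depth attachment. Assumes a current GL context. */
    GLuint create_1d_shadow_map(GLuint *out_fbo)
    {
        GLuint tex, fbo;

        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D, tex);
        glTexImage1D(GL_TEXTURE_1D, 0, GL_DEPTH_COMPONENT24, 1024, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture1D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                               GL_TEXTURE_1D, tex, 0);
        glDrawBuffer(GL_NONE);   /* depth-only FBO: no colour attachments */
        glReadBuffer(GL_NONE);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
            /* handle incomplete framebuffer */
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        *out_fbo = fbo;
        return tex;
    }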

If a point light illuminates in all directions (i.e. not a spotlight with a limited cone of illumination), you really need to draw the scene 3 or 4 times to generate a panorama (in 3D, you’d do it 6 times to create a cube map).
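
If you go with four faces, the depth pass for one light just renders the occluders once per face. Roughly (the uniform names, the per-face angle convention and draw_occluders() are placeholders, not anything prescribed):

    void draw_occluders(void);   /* your existing line-drawing call (placeholder) */

    /* Render the occluder geometry into four 1D depth maps, one per 90-degree
       face of the light. Assumes the depth-pass program is already bound and
       builds the light-space transform from the light position and face angle. */
    void render_light_depth_maps(GLuint fbo, const GLuint face_tex[4],
                                 GLint uLightPos, GLint uFaceAngle,
                                 float light_x, float light_y)
    {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glViewport(0, 0, 1024, 1);             /* one row of depth samples */
        glEnable(GL_DEPTH_TEST);

        for (int face = 0; face < 4; ++face) {
            glFramebufferTexture1D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                   GL_TEXTURE_1D, face_tex[face], 0);
            glClear(GL_DEPTH_BUFFER_BIT);

            glUniform2f(uLightPos, light_x, light_y);
            glUniform1f(uFaceAngle, face * 1.5707963f);   /* 90-degree steps */

            draw_occluders();
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }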

You’d transform each vertex using both the orthographic (plan view) projection (for gl_Position) and the projection(s) used to generate the depth map(s). The fragment shader then gets the position in depth-map space for each depth map, and the z/w coordinate is tested against the depth value stored at the x/w coordinate. In other words, it’s just like 3D shadow mapping but without a Y coordinate.
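
To make that comparison concrete, here’s the same test written as plain C rather than GLSL (the depth map is modelled as a float array; the struct and names are mine):

    /* The fragment's position in one light's clip space. In 2D only x (which
       becomes the 1D texture coordinate), z (depth) and w are needed. */
    typedef struct { float x, z, w; } LightClipPos;

    /* Returns 1 if the fragment is in shadow with respect to this depth map,
       modelled here as an array of `width` depth values in [0,1]. */
    int fragment_in_shadow(LightClipPos p, const float *depth_map, int width)
    {
        if (p.w <= 0.0f)
            return 0;                            /* behind this face of the light */

        float s   = 0.5f * (p.x / p.w) + 0.5f;   /* 1D texture coordinate, x/w */
        float ref = 0.5f * (p.z / p.w) + 0.5f;   /* fragment depth, z/w */

        if (s < 0.0f || s > 1.0f)
            return 0;                            /* outside this map's frustum */

        int i = (int)(s * (width - 1) + 0.5f);   /* nearest texel */
        return ref > depth_map[i];               /* a nearer stored value => occluded */
    }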

In the case where you have multiple depth maps forming a panorama, you’d have to decide which one to use. In 3D you can use a cube map via samplerCubeShadow, which selects the face automatically, but “square maps” don’t exist, and using a cube map would be exceedingly wasteful (six 1024x1024 faces when you only need 3 or 4 1024x1 faces).


Thanks GCElements for your comprehensive reply.

By the sounds of it, I’ll probably try using a perspective projection matrix together with four model-view matrices per light, one facing each square-map face, to create four cones of illumination.

Representing this in my final shader program would probably be more convenient with a 2D array of 1D textures per light.
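
Sketching that out, I’m assuming a single GL_TEXTURE_1D_ARRAY depth texture with four layers per light would work, attaching one layer at a time during the depth pass:

    #include <GL/glew.h>

    /* Allocate a 1D array depth texture holding four faces for each of
       `num_lights` lights. 1D array textures are allocated with glTexImage2D,
       where "height" is the number of layers. */
    GLuint create_shadow_map_array(int num_lights)
    {
        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_1D_ARRAY, tex);
        glTexImage2D(GL_TEXTURE_1D_ARRAY, 0, GL_DEPTH_COMPONENT24,
                     1024, num_lights * 4, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
        glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_1D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        return tex;
    }

    /* During the depth pass, attach the layer for (light, face): */
    void attach_face(GLuint tex, int light, int face)
    {
        glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  tex, 0, light * 4 + face);
    }

That way the final shader would only need one sampler1DArray (or sampler1DArrayShadow) rather than a separate sampler per map.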

I’ll also do some digging into how samplerCubeShadow decides which face to use, and hopefully find an equivalent for square maps.

Thanks again for your reply.

Cube map lookup works by determining which of the three components has the largest magnitude, along with its sign. The other two components are then divided by that magnitude to get values in the range [-1,1]. For a square map you only have two components.
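
Written out as plain C (the face numbering and names are my own convention), the two-component version of that lookup might look like:

    #include <math.h>

    /* v = fragment position minus light position (2D). Returns the face index
       (0:+X, 1:-X, 2:+Y, 3:-Y) and writes the in-face coordinate s in [0,1]
       and the major-axis distance. */
    int square_map_face(float vx, float vy, float *s, float *major)
    {
        float ax = fabsf(vx), ay = fabsf(vy);
        float minor;
        int face;

        if (ax >= ay) {                        /* X has the larger magnitude */
            *major = ax;
            minor  = vy;
            face   = (vx >= 0.0f) ? 0 : 1;
        } else {                               /* Y has the larger magnitude */
            *major = ay;
            minor  = vx;
            face   = (vy >= 0.0f) ? 2 : 3;
        }
        *s = 0.5f * (minor / *major) + 0.5f;   /* [-1,1] remapped to [0,1] */
        return face;
    }

Unlike cube maps there’s no fixed per-face orientation convention to follow, so whichever way you orient the faces, just make sure the depth pass and the lookup agree.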

If you use axis-aligned square maps, you can skip transforming the vertices to light space and just subtract the light position from the fragment position, then perform the lookup as described above. You’ll need to replicate the perspective projection for the Z component to get a value that matches what’s stored in the depth map, but you can re-use the division from the square-map lookup (both need the reciprocal of the largest component).
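
Putting the depth part of that into code, building on the square_map_face() sketch above (the [near, far] planes, the standard 90-degree perspective projection and the bias are my assumptions, not requirements):

    /* Window-space depth that a standard perspective projection with planes
       [near, far] writes for a point at distance `major` along the face's view
       axis; note it needs the same 1/major reciprocal already used for s. */
    float reference_depth(float major, float near_p, float far_p)
    {
        return (far_p / (far_p - near_p)) * (1.0f - near_p / major);
    }

    /* 1 if the fragment (at offset vx,vy from the light) is in shadow, given
       the four per-face depth maps modelled as arrays of `width` values. */
    int square_map_in_shadow(float vx, float vy, float near_p, float far_p,
                             const float *const depth_maps[4], int width)
    {
        float s, major;
        int face = square_map_face(vx, vy, &s, &major);

        if (major < near_p)
            return 0;                               /* closer than the near plane */

        int   i   = (int)(s * (width - 1) + 0.5f);  /* nearest texel */
        float ref = reference_depth(major, near_p, far_p);

        return ref > depth_maps[face][i] + 1e-4f;   /* small bias against acne */
    }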