Shadow Mapping


Most OpenGL shadow mapping techniques are currently realized as a two-pass approach using multitexturing:

1: construct the light-view depth map
2: render the scene and access the depth texture using projective texture mapping
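The projective lookup in step 2 boils down to a texture matrix that remaps light-space clip coordinates into [0,1] texture space. A minimal sketch of that bias transform in plain C (no GL calls; the row-major layout and the names `Mat4`, `biasMatrix`, `mat4MulVec4` are my own, not from any of the demos discussed here):

```c
#include <assert.h>

/* Row-major 4x4 matrix. */
typedef struct { float m[16]; } Mat4;

/* Bias matrix: remaps light clip space [-1,1] to texture space [0,1].
   The full texture matrix would be bias * lightProjection * lightView. */
Mat4 biasMatrix(void) {
    Mat4 b = {{ 0.5f, 0.0f, 0.0f, 0.5f,
                0.0f, 0.5f, 0.0f, 0.5f,
                0.0f, 0.0f, 0.5f, 0.5f,
                0.0f, 0.0f, 0.0f, 1.0f }};
    return b;
}

/* out = a * v for a row-major matrix and a 4-component vector. */
void mat4MulVec4(const Mat4 *a, const float v[4], float out[4]) {
    int r;
    for (r = 0; r < 4; ++r)
        out[r] = a->m[r*4+0]*v[0] + a->m[r*4+1]*v[1]
               + a->m[r*4+2]*v[2] + a->m[r*4+3]*v[3];
}
```

In a real implementation this matrix would be loaded onto the texture matrix stack before pass 2.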

In contrast, DirectX supports an object ID technique that writes IDs instead of pixel colors into the target texture. In a second pass, shadowing is determined by comparing object IDs instead of depth values.

Is it feasible to realize this with OpenGL? How about performance? (Pass 1 might be slower, but…) Has anyone implemented it? How should IDs be generated/stored? glColor(RGBA(id++))? Will the visual quality be comparable to 8/16-bit depth buffer solutions?
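On the ID-generation question: with an 8-bit-per-channel RGBA framebuffer, one option is to pack a 32-bit ID across all four channels rather than rely on a single channel. A hedged sketch of the packing and unpacking (the names `idToRGBA`/`rgbaToId` are mine; the bytes would feed glColor4ub on the way in and come back via glReadPixels):

```c
#include <assert.h>
#include <stdint.h>

/* Split a 32-bit object ID into four 8-bit channel values (R,G,B,A). */
void idToRGBA(uint32_t id, uint8_t rgba[4]) {
    rgba[0] = (uint8_t)(id >> 24);
    rgba[1] = (uint8_t)(id >> 16);
    rgba[2] = (uint8_t)(id >> 8);
    rgba[3] = (uint8_t)(id);
}

/* Reassemble the ID after reading the pixel back. */
uint32_t rgbaToId(const uint8_t rgba[4]) {
    return ((uint32_t)rgba[0] << 24) | ((uint32_t)rgba[1] << 16)
         | ((uint32_t)rgba[2] << 8)  |  (uint32_t)rgba[3];
}
```

Note this assumes lighting, dithering, and blending are disabled, so the written color survives rasterization unchanged.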


Maybe you should take a look at LordKronos’ article:

which discusses index shadow mapping.

The papers are cool.
Any other cool links?

This looks like something I’m working on.

It’s possible.
Avoid Z-buffer reads, because not all hardware is capable of letting you read the Z-buffer, and it is painfully slow on some hardware…

So basically set OpenGL to default mode, render your scene with the object ID as color, and read back the frame buffer (a safe technique that will work on all/any cards).

Object IDs should be recorded at the time you render the objects (so design your objects with a member variable for the ID).

Don’t know how you’ll do the shadowing with this information, though…

If you have more courage than I have, you could take a look at the sources given at:

The comments are in Japanese. I believe the index shadow mapping is done without register combiners (but with tex_env_combine); perhaps it is more portable (?)…

In doing my research on index shadow mapping (the article Nicolas mentioned above), I really did find that it wasn’t the best solution most of the time. You can’t use per-polygon indices, or else you tend to get a lot of shadowing errors at each polygon boundary. If you use per-object indices (the technique I believe IShadowMap uses), you fix this problem, but you lose the ability for an object to cast a shadow on itself.
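The trade-off shows up directly in the comparison itself: with per-object indices the shadow test is a plain equality check, so a fragment occluded by its own object still matches the ID stored in the map and stays lit. A tiny sketch (the name `indexShadowLit` is mine, not from the article):

```c
#include <assert.h>
#include <stdint.h>

/* Index shadow test with per-object IDs: a fragment counts as lit when
   the ID sampled from the light-view ID map equals the ID of the object
   being shaded. Because every part of an object shares one ID, a
   fragment occluded by its own object still compares equal — which is
   exactly the lost self-shadowing described above. */
int indexShadowLit(uint8_t idFromMap, uint8_t idOfFragment) {
    return idFromMap == idOfFragment;
}
```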

I really think depth shadows are a better choice. The problem with depth shadows is that you typically have to read from the depth buffer, which is likely problematic on many cards (don’t know personally; haven’t tried it on anything but a GeForce). In my second lighting and shadowing article, I demonstrated an alternative that uses register combiners. However, if you need a technique that doesn’t read the depth buffer and doesn’t require register combiners (something more universally supported), try toying around with texgen. If I recall correctly, the depth map sample on nVidia’s site uses the depth buffer to build the depth map, but in the second step (projecting the shadow map onto the scene) it uses texgen to calculate the reference value to compare against the depth map. It seems to me it would be possible to use texgen when building the depth map also. That might be a bit more usable.
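For reference, EYE_LINEAR texgen just evaluates a dot product of a user-supplied plane equation with the eye-space vertex; for the shadow lookup, the R plane comes from the combined bias * light-projection * light-view transform, so the generated r coordinate is the fragment’s depth as seen from the light. A minimal sketch of what the hardware computes per vertex (plain C, my naming):

```c
#include <assert.h>

/* EYE_LINEAR texgen evaluates each generated texture coordinate as
   r = p[0]*x + p[1]*y + p[2]*z + p[3]*w, where p is the plane set via
   glTexGen and (x,y,z,w) is the eye-space vertex. For depth-map
   shadows, the R plane is derived from bias * lightProj * lightView,
   making r the light-space depth reference value. */
float texgenEyeLinear(const float plane[4], const float eyeVertex[4]) {
    return plane[0]*eyeVertex[0] + plane[1]*eyeVertex[1]
         + plane[2]*eyeVertex[2] + plane[3]*eyeVertex[3];
}
```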

Also, I think the IShadowMap demo requires an alpha buffer to work (don’t know for sure; haven’t had a chance to sort through the source). As far as I know, alpha buffers aren’t such a universally supported feature. Last time I actually checked, only nVidia supported this. Maybe that’s changed, but it’s something to be aware of.

Thanks for your answers. I have read all your references and understand why depth maps have certain advantages, especially because they do not have discretisation artefacts at adjacent polygon borders. Perhaps someone will find a clever way to eliminate the ugly border effects, possibly by using a second index map and comparing the two maps somehow via register combiners.

Unfortunately, there is no direct OpenGL path from the depth buffer to the texture map (unless you have a high-end Onyx at hand). I am targeting my own application at the Nvidia PC platform (using register combiners or possibly the awaited NV_fragment_shaders, similar to DirectX8’s pixel shaders). The nice Nvidia shadow map demo uses 16 bits of depth precision and still has precision problems compared to real 24/32-bit depth maps (or indices) if you have real-world geometry at hand. Perhaps a linear z-mapping would help. Simply extending their implementation to 24 bits would be possible, but performance would be a problem (even going from 8 to 16 bits needs an extra render pass). Computing the depth map via texgen is a great idea (but so far I haven’t managed to implement it).
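On the linear z-mapping idea: standard perspective depth is hyperbolic in eye distance, so most of the precision is spent near the near plane. A small sketch (my naming) that converts a window-space depth value back to linear eye-space distance makes the skew visible:

```c
#include <assert.h>

/* Convert a window-space depth zWin in [0,1] (standard perspective
   projection, near plane n, far plane f) back to a linear eye-space
   distance. With n=1 and f=100, zWin=0.5 already maps to a distance
   under 2 — half the depth-buffer range covers under 1% of the view
   volume, which is why 16-bit maps struggle with real-world scenes. */
float windowZToEyeZ(float zWin, float n, float f) {
    float zNdc = 2.0f * zWin - 1.0f;            /* [0,1] -> [-1,1] */
    return (2.0f * n * f) / (f + n - zNdc * (f - n));
}
```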

IShadowMap uses the alpha channel and thus can only manage up to 256 different objects on typical systems.

Still collecting more ideas…