I've been watching the Doom 3 footage again, and was wondering: why can't lightmapping be done like shadow mapping?
I've seen shadow mapping done by taking a light, building a viewing volume from that light's position, rendering the scene in that volume, then taking its depth buffer and checking it against the z-buffer of the scene drawn from the player's perspective. The pixels that fail the test (something is blocking the light) get shaded, and so on for each light.
Could dynamic light mapping be done this way too? I mean, could you do the exact same thing while you're doing the shadow mapping, except the pixels that don't fail the test get lit, depending on that pixel's distance from the light? The only problem would be finding the distance. Does this make sense? Also, how the hell does Doom 3 make those really cool dynamic lighting effects, since they have completely removed light maps altogether?
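The idea in the two posts above can be sketched in a few lines: run the usual shadow-map depth comparison, and for pixels that pass, scale the light by distance. This is only an illustrative CPU-side sketch; the function and parameter names are made up, and a real renderer would do this per-pixel on the GPU with values sampled from the shadow map.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Hypothetical sketch: shadow-map depth test plus distance falloff.
// "shadowMapDepth" stands in for the depth stored in the light's depth
// buffer at this pixel; in a real renderer it is sampled from the shadow map.
float lightPixel(float pixelDepthInLightSpace, float shadowMapDepth,
                 float distToLight, float lightRadius)
{
    const float bias = 0.005f;            // small offset to avoid self-shadowing acne
    if (pixelDepthInLightSpace > shadowMapDepth + bias)
        return 0.0f;                      // something is blocking the light: shaded
    // pixel is visible to the light: linear falloff with distance, clamped at 0
    float atten = 1.0f - distToLight / lightRadius;
    return std::max(0.0f, atten);
}
```

So a blocked pixel gets 0, and a visible pixel gets brighter the closer it is to the light, which is exactly the "dynamic light map" behavior described above. The "finding the distance" problem goes away if you already have the pixel's position in light space for the depth test, since the distance falls out of that same transform.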
All of the lights in Doom 3 are projective. There is a demo and some info about this, called “projective texture mapping”, on NVIDIA's developer site. For example, where you see a light casting down from a fan on the ceiling and you see the soft shadow of the fan blades: they have a texture with an image (black fading to white) that looks like the fan blades. When the light is projected onto the scene, this fan-blade texture is modulated with the projected light (and rotated if needed).
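The core of projective texture mapping is just transforming a point into the light's clip space, doing the perspective divide, and remapping the result to texture coordinates. Here is a minimal sketch of that math, assuming a simple 90-degree perspective projection with the light looking down its own -z axis; the names are illustrative, not from the NVIDIA demo.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float u, v; };

// Minimal sketch of projective texturing, assuming a 90-degree FOV
// perspective from the light: clip.x = x, clip.y = y, clip.w = -z
// (point coordinates are given in the light's own space).
Vec2 projectToLightTexture(float x, float y, float z)
{
    float w = -z;                // perspective divide term
    float ndcX = x / w;          // normalized device coords in [-1, 1]
    float ndcY = y / w;
    // remap [-1, 1] to [0, 1] so we can sample the fan-blade texture
    return { ndcX * 0.5f + 0.5f, ndcY * 0.5f + 0.5f };
}
```

A point straight ahead of the light lands in the middle of the texture at (0.5, 0.5), and points farther off-axis sample toward the edges, which is how the projected fan-blade image spreads out over the scene with distance.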
OK, I have been reading about this projective texture mapping; it's pretty interesting, and very much like shadow mapping in most ways. I wonder why this style of lighting isn't used more often?
I still wonder, though: would it be better to combine this with another method that also lights up the pixels in the light's view, brighter the closer they are? Like a dynamic light-and-shadow map. Mix this with normal-map-style bump mapping, and it could look very realistic. What do you think?
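The combination being proposed here can be sketched as one shading expression: shadow visibility times distance attenuation times the N dot L term you get from the normal map. This is a hedged sketch, not anyone's actual engine code; all names are made up and the vectors are assumed to be unit length.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Sketch combining the thread's ideas: shadow/projective visibility,
// distance attenuation, and normal-map (N dot L) bump lighting.
float shade(const Vec3& normal,     // per-pixel normal from the normal map
            const Vec3& toLight,    // unit vector from the pixel toward the light
            float visibility,       // 0 = in shadow, 1 = fully lit (from the shadow/projected map)
            float distToLight, float lightRadius)
{
    float nDotL = std::max(0.0f, dot(normal, toLight));
    float atten = std::max(0.0f, 1.0f - distToLight / lightRadius);
    return visibility * atten * nDotL;
}
```

A pixel in shadow contributes nothing regardless of its normal, while a lit pixel gets brighter when it is close to the light and when its bump-mapped normal faces the light, which is roughly the "very realistic look" being described.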