Do we need spotlights anymore?

I was just wondering about something…
Does Doom 3 use spotlights? (I mean with a specific code path dedicated to spotlights.)
I think it's useless; all you need is a point light, and some geometry to cast the needed shadows (that is, to block light rather than project it)… it's even better, as you can deal with arbitrarily shaped spotlights…
And that sounds more like a “unified model”.
Don't you guys think so?

All OpenGL lighting is per vertex. If spotlights were per pixel they'd be more useful (although they'd still need an additional parameter to soften the edge).
There’s a difference between what you can do in code and what a user or artist can use to describe lights. Spotlights have been very popular and useful in describing scene lighting, but they could be implemented as a subset of other, more general approaches. Nobody uses the basic OpenGL lighting path for anything interesting these days. Almost anything fancy, from bump mapping to per-pixel lighting to shadows, requires hand-coding your lighting and pretty much ignoring what OpenGL provides for lights. This is deliberate. Instead of providing fragment lighting support in functions (which SGI had a spec for, and which could be implemented today), implementors have provided a toolbox to let you construct your own lighting. It’s much more difficult, but it’s proven popular and is infinitely more flexible.
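The soft-edge parameter mentioned above is typically a second cone angle: full intensity inside an inner cone, zero outside an outer cone, and a ramp between the two. Here's a minimal numeric sketch of that idea; the function name and the linear ramp are my own choices, not taken from any particular API:

```python
import math

def spotlight_factor(light_dir, spot_dir, cos_inner, cos_outer):
    """Per-fragment spotlight term with a softened edge.

    light_dir: unit vector from the surface point toward the light.
    spot_dir:  unit vector the spotlight points along.
    cos_inner / cos_outer: cosines of the inner (full-bright) and outer
    (cutoff) cone half-angles, with cos_inner > cos_outer.
    """
    # Cosine of the angle between the spot axis and the fragment
    # (the direction from the light to the fragment is -light_dir).
    cos_angle = -sum(l * s for l, s in zip(light_dir, spot_dir))
    if cos_angle <= cos_outer:
        return 0.0          # outside the outer cone: no light
    if cos_angle >= cos_inner:
        return 1.0          # inside the inner cone: full intensity
    # Linear ramp between the two cones softens the edge.
    return (cos_angle - cos_outer) / (cos_inner - cos_outer)
```

A single-angle spotlight (hard cutoff, or the fixed-function exponent falloff) is just a special case of this.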

If anyone’s interested, these are the SGI fragment lighting extensions. I wrote some code that used these; it was pretty easy to implement bump-mapped fragment lighting without any hand coding of your whole lighting equation.

Not sure about the perturb-normal one; I think it may be rolled into the light texture extension. I don’t see the extra cutoff parameter to soften the spotlight in fragment lighting, and I can’t find the extension for it.

[This message has been edited by dorbie (edited 06-30-2002).]

Unified means that the lights interact in the same way with all surfaces in the world ( static world and dynamic characters - it’s all computed the same way ). That’s all Carmack has said. Adding different types of light doesn’t mean that lighting isn’t unified. It’s all in the MacWorld video.

Well, actually I wasn't talking about OpenGL lighting, but about all those techniques that use cubemaps or projected textures to simulate spotlights.
I just wanted to know whether that is still useful when your shadowing system can do the whole thing for you…
The only problem I can see is the hardness of the shadows, which may ruin your spotlight effect…?

Unified means different things to different people. I think what Carmack means when he says unified is that he has a single lighting equation for a surface, which all lights use accurately; the results are summed in the framebuffer. Shadows are probably optional, but work correctly when enabled for any given light source. So I think I'm with you on the unified interpretation.

[This message has been edited by dorbie (edited 06-30-2002).]

Dorbie, yes, we agree.

And I do think DOOM 3 has spotlights. It's merely an extra term calculated per-pixel. The projected…ahem…shadow/light texture is modulated by this term. Actually, I'm sure DOOM 3 just implements the GL lighting model per-pixel ( with a few tweaks ). What I would like to know is what Carmack does to avoid clamping when summing lights. Any ideas?

PS: And there are definitely shadows from multiple lights in the scene with the Imp and fireball.

PH: you mean the only thing that changes is the attenuation map texture, while the equation remains the same? You could, for example, bind a 3D texture for a point light and a cubemap for a spotlight (I think you see what I mean), without really changing your vertex program.
Well, I guess this would work… and you can still call it ‘unified’.

Clamping is not a problem when you are accumulating results in the framebuffer; it can look OK, just watch your totals if it's an issue. Clamping IS a problem when it happens between two passes, for example in an emboss bump map, where the tops of your bumps can get flattened, or when individual terms get clamped before use.
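The emboss flattening is easy to see numerically. A quick sketch, with made-up height and bias values, comparing a clamp between passes against a clamp applied only at the end:

```python
def clamp(x):
    """Saturate to [0, 1], as the fixed-function pipeline does."""
    return max(0.0, min(1.0, x))

# Emboss bump mapping approximates diffuse lighting as a difference of
# two shifted height-map samples plus a bias.  If the intermediate sum
# saturates, the tops of the bumps flatten out.
height_a, height_b, bias = 0.9, 0.2, 0.5

# Clamped between passes: the 0.9 + 0.5 intermediate saturates at 1.0,
# so the subtraction starts from a flattened value.
two_pass = clamp(clamp(height_a + bias) - height_b)

# Full intermediate range, clamped only on the final result.
exact = clamp(height_a + bias - height_b)
```

Here `two_pass` comes out darker than `exact`: the bump top that should have been at full brightness gets knocked down by the intermediate clamp.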

I strongly suspect Carmack uses destination alpha to store shadow_texture(?) * attenuation * cutoff (pick your own combination of attenuators) between passes on systems with limited texture units. So this could get clamped, but they modulate each other and then the light on the second pass, so it's not an issue.

The lighting results just accumulate in the destination color; if they clamp, they clamp. It will probably give them a realistic-looking overexposed look unless there's a huge chroma shift, and even then it'll probably look great.

Yes, the lighting equation is identical. It's lighting_result * (attenuation * spotlight_cutoff * shadow * whatever else I might modulate the light with). Oh, and it works for color too.
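That per-light combine can be sketched directly; the function and value names here are mine, and scalars stand in for RGB channels (each channel goes through the same math):

```python
def shade_fragment(lighting_result, attenuation, spot_cutoff=1.0,
                   shadow=1.0, extra=1.0):
    """One light's contribution to a fragment: the full lighting result
    modulated by every attenuator.  A point light and a spotlight use
    the identical equation; they just feed in different modulators."""
    return lighting_result * (attenuation * spot_cutoff * shadow * extra)

# Same equation, different modulators per light type:
point_contrib = shade_fragment(0.9, attenuation=0.5)
spot_contrib  = shade_fragment(0.9, attenuation=0.5, spot_cutoff=0.6)

# Framebuffer accumulation: contributions from all lights just sum,
# and the hardware clamps the running total to [0, 1].
framebuffer = min(1.0, point_contrib + spot_contrib)
```

A stencil-shadowed fragment is simply culled before the add, which has the same net effect as passing `shadow=0.0`.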

[This message has been edited by dorbie (edited 06-30-2002).]

OK, I get your point, but I can't figure out what your “* shadow” is. I thought shadow is when the stencil count != 0 (or whatever the init value is)?
Or maybe we're not talking about the same thing?

There was an interview with Carmack where he said he did some pre-modulation and post-scaling to support an arbitrary number of lights per surface ( at the expense of some precision ).

For example, if I accumulate a series of bright lights, sum( Li ) will eventually exceed 1.0. This is the clamping I was talking about. I've been looking at ways of extending the range ( dynamically? ) at the expense of precision. Maybe combine it with the base texture color to simplify/optimize shadows from multiple lights on shiny surfaces.
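One way to read the pre-modulation/post-scaling idea: scale each light down before accumulating so the sum fits in [0, 1], then scale back up in a later pass. A numeric sketch with made-up values (the 8-bit quantizer models framebuffer precision; the 0.5 prescale factor is an assumption, not a figure from the interview):

```python
def quantize(x, bits=8):
    """Round to the nearest representable framebuffer value."""
    levels = (1 << bits) - 1
    return round(min(1.0, max(0.0, x)) * levels) / levels

# Three lights whose true sum (1.5) exceeds the framebuffer's [0, 1] range.
lights = [0.7, 0.5, 0.3]
base_color = 0.5   # e.g. a diffuse map applied in a final modulate pass

# Naive: accumulate at full brightness; the sum clamps to 1.0 too early,
# so the overbright information is lost before the base modulate.
naive_sum = 0.0
for li in lights:
    naive_sum = min(1.0, naive_sum + quantize(li))
naive = naive_sum * base_color

# Pre-modulate each light by 1/2, post-scale by 2 in the final pass:
# the running sum now fits, at the cost of one bit of precision per term.
prescale = 0.5
s = 0.0
for li in lights:
    s = min(1.0, s + quantize(li * prescale))
ranged = min(1.0, s / prescale * base_color)
```

`naive` lands at 0.5 while `ranged` recovers roughly the correct 0.75, which is the range-versus-precision trade being described.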


PH, darn, I just deleted the video. I totally expect multiple shadowed lights; I just haven't seen them in the shots. Any engine accumulating lights that can do shadows for one light should be able to just enable them for all lights: it'd just clear the stencil buffer and have at it with stencil volumes etc. for the next light. There are engines now which do stencil shadows or projected shadows of another kind, but it's just a nasty hack. Again, this is another aspect of a unified lighting model. The key is that the fragment color added to the framebuffer represents everything about the fragment lighting result for a single light (of course the stencil test culls the fragment rather than modulating it to zero, but the net effect is the same).
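The per-light loop described above (clear stencil, render this light's shadow volumes, additively render the lit pass with the stencil test culling shadowed fragments) can be simulated on a toy 1-D "framebuffer"; all the data and names here are invented for illustration:

```python
def clamp(x):
    return max(0.0, min(1.0, x))

def render_lights(num_fragments, lights):
    framebuffer = [0.0] * num_fragments
    for light in lights:
        # 1. Clear the stencil buffer for this light.
        stencil = [0] * num_fragments
        # 2. "Render" the shadow volumes: bump the stencil count for
        #    every fragment this light's volumes cover.
        for frag in light["shadowed_fragments"]:
            stencil[frag] += 1
        # 3. Additive lit pass: the stencil test culls fragments with a
        #    nonzero count; everything else accumulates this light's
        #    contribution, clamped by the framebuffer.
        for i in range(num_fragments):
            if stencil[i] == 0:
                framebuffer[i] = clamp(framebuffer[i] + light["intensity"])
    return framebuffer

lights = [
    {"intensity": 0.4, "shadowed_fragments": [2]},
    {"intensity": 0.5, "shadowed_fragments": [0]},
]
# Fragment 1 sees both lights, fragment 0 only the first,
# fragment 2 only the second.
fb = render_lights(3, lights)
```

Because the stencil buffer is cleared per light, enabling shadows for every light really is just running the same two steps once more per light.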

[This message has been edited by dorbie (edited 06-30-2002).]

There’s a scene in the leaked video where the multiple shadows are very visible. The player hides and a monster walks by; there are two huge shadows there.

Oh, and implementing this for surfaces with colored specular highlights requires rendering to a texture. I had a few posts on this a long time ago, and I settled on using a double-buffered p-buffer ( I’ll revisit that work when time permits, as it might not be the most optimal ).


Hmmm… I can see that with a fragment light result in a nominal range, and attenuation maps which clamp, it would never blow out. This would mean that you want to keep the attenuation maps a bit darker than normal and boost the result during attenuation modulation. So the clamping he's avoiding is on an individual light's attenuation, to allow a light to blow out a surface and look bright, not to avoid all clamping in the framebuffer. He's trying to avoid clamping in the attenuation * cutoff * shadow(?) product before he multiplies it by the fragment light result. Doing this costs him some precision in the attenuation modulation, because it's ramped over fewer bits and multiplied later.

It’s not the final result he doesn’t want to clamp; he’s actually trying to make it > 1 in places. It’s the intermediate modulation he needs to stop clamping before it multiplies the result, and he can then boost the overall effect. As you add lights to the framebuffer, they will clamp.

Beyond this he MIGHT manage overall levels in the scene, but not to save every destination fragment from a clamp.

Anyhoo, that would be my interpretation. Clamping is not always bad, and can look quite good at times.

[This message has been edited by dorbie (edited 06-30-2002).]

PH, if you look at the massive Doom 3 thread you’ll see I predict that on cards with limited texture units some shadow effects will be monochrome. However, I don’t think specular highlights are a unique case. Their color can be modulated by the same color as the diffuse term for a point source; it’s just that when you only have one component in alpha…

Do the math: L is the same for all terms, and it’s L that is attenuated and applies the color, as opposed to a diffuse map, which isn’t applied to specular.

[This message has been edited by dorbie (edited 06-30-2002).]

Originally posted by PH:
Oh, and implementing this for surfaces with colored specular highlights requires rendering to a texture. I had a few posts on this a long time ago and I settled with using a doubled buffered p-buffer ( I’ll revisit that work when time permits as it might not be the most optimal ).

I don’t really see why you would need the render to texture. An older tutorial:

has basically the entire light equation and can be done on a GF2 without a render to texture. The color of the spec comes from the light color. Maybe I’m misunderstanding you.

[This message has been edited by zeroprey (edited 07-01-2002).]

Trust me, specular lighting is not that simple when combined with shadow volumes. If you try to implement a completely general system, you'll see what I mean. It's exactly like the interpolation problems with specular lighting: when I brought this issue up ( in a post a long time ago ), only Cass Everitt immediately knew what I meant. In short: there are lots of subtle issues with lighting/shadowing that you need to experience first-hand to accept.

So let's all move to PTM: just write out the whole lighting equation and Taylorize it, put this into a texture, and done…