Lightmaps Future

In FPS games, the player doesn't go up much. Perhaps he will go up to the 3rd floor of a building.
Also, these light volumes are the reason why the game consumes something like 600 MB.
What is your technique?

Quake 3 used a “3d lightmap” for the first time, not HL2.

I'm curious, how would you do a "3D lightmap"? Assuming you are talking about a cubemap, you could render all shadow casters as black into the cubemap. If your player (or whatever) was behind the shadow caster it would look fine, but it would look wrong if the player moved in front of the shadow caster, because they would still be shaded.

I just do a raycast between the light source and the object, and limit the engine to only updating one light/model pair each frame.
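A minimal sketch of that throttling scheme (all names, structs, and sizes here are mine, not from any particular engine):

```c
#include <stdbool.h>

typedef struct { float x, y, z; } vec3_t;

typedef struct { vec3_t origin; } light_t;
typedef struct { vec3_t origin; bool lit; } model_t;

#define NUM_LIGHTS 32
#define NUM_MODELS 64

static light_t lights[NUM_LIGHTS];
static model_t models[NUM_MODELS];

/* Stub: replace with the engine's real ray-vs-world test. */
static bool TraceBlocked(vec3_t from, vec3_t to)
{
    (void)from; (void)to;
    return false;
}

/* Visit exactly one light/model pair per frame, round-robin,
   so the raycast cost is spread across many frames. */
static void UpdateOneLightModelPair(void)
{
    static int pair = 0;
    int l = pair % NUM_LIGHTS;
    int m = (pair / NUM_LIGHTS) % NUM_MODELS;

    models[m].lit = !TraceBlocked(lights[l].origin, models[m].origin);

    pair = (pair + 1) % (NUM_LIGHTS * NUM_MODELS);
}
```

The trade-off is latency: a light that just became blocked keeps lighting the model until its pair comes around again.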

A 3d lightmap is not a cubemap. It is really a volume texture, though the vertical axis is often much less detailed.

Head for the Quake 3 source if you want (old) implementation details.

Please note that this low-res 3d lightmap is only used on dynamic objects, 'entities' as they are called in the q3 engine. Classic 2D lightmaps stored in atlases are still used for shadowing static meshes.

For a full-res colored sphere of light, you would need a 256x256x256 volume texture: 16.7 million texels, which is about 48 MB at 3 bytes (RGB) per texel, just for the light texture.

Maybe I will use this method with a 64x64x64 texture. It would be a lot faster than processing the correct 2D mapping coords for each face.
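For scale: a 64x64x64 RGB volume is only 64*64*64*3 = 786,432 bytes, well under a megabyte, and uploads with a single call (assuming OpenGL 1.2+ for glTexImage3D; the function name below is my own, and the texel contents are up to you):

```c
#include <GL/gl.h>
#include <stdlib.h>

/* Allocate and upload a 64x64x64 RGB light volume.
   glTexImage3D needs GL 1.2 (or GL_EXT_texture3D); on some
   platforms you have to fetch it via the extension mechanism. */
void UploadLightVolume(void)
{
    const int dim = 64;
    unsigned char *texels = calloc(dim * dim * dim * 3, 1);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8, dim, dim, dim, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, texels);

    free(texels);
}
```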

I think you must be talking about dynamic lights on the walls, like rockets and weapon flashes.

Well, a bit of history on the quake 3 engine…

Originally posted by halo:
I think you must be talking about dynamic lights on the walls, like rockets and weapon flashes.
Not only these, but also to light other monsters, the player weapon, etc., according to the static lighting baked and stored in the q3map level. For each voxel (3d texel), RGB ambient light, RGB directed light, and the direction of light in compressed lat/long format are stored, for a grand total of eight bytes.
Each voxel is about the size of the player, so the total number of voxels is not so big. This 3d texture was only stored in CPU memory; only the lighting results were sent to the video card.

Search 'q3 light grid size' if you want to dig up more info on the subject. And use the source, Luke.
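If I recall correctly, the default grid spacing was 64x64x128 units per voxel (coarser on the vertical axis), and the voxel layout looked roughly like this. Field order and the decode are from memory of tr_light.c, so check the source for the exact details:

```c
#include <math.h>

/* One q3 light grid voxel, as described above. */
typedef struct {
    unsigned char ambient[3];   /* RGB ambient light           */
    unsigned char directed[3];  /* RGB directed light          */
    unsigned char latLong[2];   /* light direction, lat/long   */
} gridPoint_t;                  /* 8 bytes per voxel total     */

/* Decompress the two lat/long bytes back into a unit vector. */
static void DecodeLightDir(const gridPoint_t *gp, float dir[3])
{
    float lng = gp->latLong[0] * (2.0f * (float)M_PI / 255.0f);
    float lat = gp->latLong[1] * (2.0f * (float)M_PI / 255.0f);

    dir[0] = cosf(lat) * sinf(lng);
    dir[1] = sinf(lat) * sinf(lng);
    dir[2] = cosf(lng);
}
```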

zed, what’s the most poisonous spider you’ve seen in your house?

Oh, I see, so they use the “3D texture” as a lookup table and set up the player lighting using regular hardware lights, based on what the current voxel says to use.
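In fixed-function GL that would look something like this (my own sketch; the grid lookup is a placeholder, not the actual q3 code):

```c
#include <GL/gl.h>

/* Placeholder lookup: a real engine would interpolate its
   CPU-side light grid at 'pos'; these constants just return
   something plausible so the sketch is self-contained. */
static void SampleLightGrid(const float pos[3],
                            float ambient[4], float diffuse[4],
                            float dir[3])
{
    (void)pos;
    ambient[0] = ambient[1] = ambient[2] = 0.2f; ambient[3] = 1.0f;
    diffuse[0] = diffuse[1] = diffuse[2] = 0.8f; diffuse[3] = 1.0f;
    dir[0] = 0.0f; dir[1] = 0.0f; dir[2] = 1.0f;
}

/* Feed the current voxel's data into one regular hardware light. */
static void SetupEntityLight(const float entityPos[3])
{
    float ambient[4], diffuse[4], dir[3];
    SampleLightGrid(entityPos, ambient, diffuse, dir);

    /* w = 0 makes GL_POSITION a direction (directional light). */
    float lightDir[4] = { dir[0], dir[1], dir[2], 0.0f };

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_AMBIENT,  ambient);
    glLightfv(GL_LIGHT0, GL_DIFFUSE,  diffuse);
    glLightfv(GL_LIGHT0, GL_POSITION, lightDir);
}
```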

So they must be pre-processing the voxels to eliminate those that are occluded from the light, right? I thought about doing something like this, but I thought they were using a complex pre-processed BSP shape. I didn’t realize they were just using simple boxes.

This would save a little time, because it wouldn't require a raycast to tell whether a light can see the player. But I have it set up so the engine only updates one light/entity pair per frame, so I think that is good.

As for weapon flashes and other lights on the BSP walls, a 64x64x64 3D texture is probably the way to go.

What’s the future of the normal?

Leghorn: The normal vector is here to stay. Think about it; not only is it used for faces, but nowadays even for 2D textures (per-pixel normals stored in source 2D textures, i.e. normal maps).

One thing to mention could be: most, if not all, programmers are taught "keep your normal's length at 1.0". It's basically 3D Programming 101. What is lost in this is that, once you go "advanced", a normal that is not normalized to unit length can be used to enhance scenes.
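One concrete example of what I mean (a sketch, assuming fixed-function lighting with GL_NORMALIZE left disabled): the diffuse term N.L scales linearly with the normal's length, so pre-shortening vertex normals is a free per-vertex dimmer, e.g. for baked ambient occlusion:

```c
#include <GL/gl.h>

/* Submit one vertex (inside glBegin/glEnd) whose normal is
   scaled by a precomputed occlusion factor in [0,1]. With
   GL_NORMALIZE disabled, the fixed-function diffuse term
   scales with |N|, so the vertex simply receives 'occlusion'
   times the normal diffuse lighting. */
void EmitOccludedVertex(const float pos[3], const float normal[3],
                        float occlusion)
{
    glNormal3f(normal[0] * occlusion,
               normal[1] * occlusion,
               normal[2] * occlusion);
    glVertex3fv(pos);
}
```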

As for lightmaps, IMHO they will not die anytime soon, as they are the cheapest way to fake (as we are all just faking everything) some lit surfaces - especially surfaces not receiving dynamic shadows. Sure, we may precalc texture+lightmap on the CPU (possibly even using ray tracing and radiosity to produce it) and upload, but the lightmap is still used.

Also, while I suspect many of us are die-hard gamers too (in addition to developers), OpenGL and 3D visualisation have many applications we often never think of - from the most obvious, CGI/FX for movies driving shaders forward, to large-scale architecture where geometry performance (initially) is king. For the latter case, there simply are no characters (NPCs or not) casting shadows, and preprocessing time may be seconds or minutes, but fly-bys in such a scene must be interactive.

Lightmaps won't die, I'm sure, but like all optimizations they must be applied where appropriate. I know of quite a few games today that (out of laziness?) use shaders (thereby artificially limiting their target audience) where perhaps 90% of the uses are what I'd call "bad" and lightmaps would be more appropriate.

tamlin: I agree with what you said.
However, I have never seen a game using shaders (dynamic lighting?) where lightmaps would have been better. Hm, maybe FEAR, though I haven't played it and am not sure lightmaps would've been better there. Any other games?

As far as I can tell, most games on the market are more conservative than one would expect, certainly because, although dynamic lighting is nice, techniques such as radiosity still have many advantages, even today.

I think the problem is that even high-end cards are still not able to reproduce lighting at the quality radiosity can give you. Therefore, in indoor games precalculated lightmaps are unbeatable in quality. The problem is that it is very difficult to convincingly combine them with dynamic lighting.

In outdoor games (Crysis) radiosity makes no sense, and if you are outdoors anyway you might want to show off nice weather effects, so the sun will just be another dynamic light source. Using perspective shadow maps the quality is indeed pretty good, and no one will miss real soft shadows, so you can get away without global illumination.

I think Doom3 showed us all that dynamic lighting might be nice for some games, but in general people are used to very diffuse lighting, and that is what players miss in games that only use hard shadows (I mean no global illumination).

So I don't expect lightmaps to just vanish - not as long as GPUs are not fast enough to generate nearly equal quality in real time. And it is more likely that we will see ray-tracing GPUs before rasterizing GPUs reach that speed.

Jan.

Your propensity to use outdated and soon-to-be-unsupported OpenGL extensions, however halo, is not clever use of a toolset. People like you will hold OpenGL back.
That’s right, Halo - DirectX is all your fault. Why, if it wasn’t for you, I’ll bet that profound ‘Call to Action’ thread wouldn’t have been necessary.

:P

Originally posted by tamlin:
Think about it; not only is it used for faces, but nowadays even for 2D textures (per-pixel normals stored in source 2D textures, i.e. normal maps).
Good point, tamlin.