Can lighting be implemented with texture mapping?

Can anyone help?

It can. This is how Doom III works.

Hmm… a per-pixel attenuation map…
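If it helps, here is a minimal sketch (my own illustration, not from the post) of what such an attenuation map could contain, assuming the common trick of storing a clamped 1 - d^2 falloff that fragments look up by their position relative to the light:

#include <vector>
#include <cstdint>
#include <algorithm>

// Build a size x size 2D attenuation map: each texel holds max(0, 1 - d^2),
// where d is the distance from the texture centre in light-radius units.
// In the classic setup this 2D map is multiplied with a matching 1D map for
// the third axis, so the product approximates 1 - (x^2 + y^2 + z^2) / r^2.
std::vector<std::uint8_t> makeAttenuationMap2D(int size)
{
    std::vector<std::uint8_t> texels(size * size);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x)
        {
            // map the texel centre to [-1, 1] around the light
            float fx = 2.0f * (x + 0.5f) / size - 1.0f;
            float fy = 2.0f * (y + 0.5f) / size - 1.0f;
            float att = std::max(0.0f, 1.0f - (fx * fx + fy * fy));
            texels[y * size + x] = static_cast<std::uint8_t>(att * 255.0f + 0.5f);
        }
    return texels;
}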

The operations involved in lighting computations are:

  • multiplication,
  • addition,
  • clamp,
  • dot product,
  • exponent.

Each of these operations can be performed in one texture unit, except clamping and exponentiation, which sometimes need more. However, clamping and exponentiation are only needed for the specular contribution. In other words, computing lighting without specular removes a lot of hassle.
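For illustration, here is roughly how those operations show up in the per-fragment lighting equation (a sketch of my own, not tied to any particular hardware; on fixed-function cards each line would map onto a texture unit or combiner stage):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

inline float dot(const Vec3& a, const Vec3& b)                // dot product
{ return a.x * b.x + a.y * b.y + a.z * b.z; }

// N = surface normal, L = direction to the light, H = half vector
float shade(const Vec3& N, const Vec3& L, const Vec3& H,
            float lightIntensity, float shininess)
{
    float diffuse  = std::max(dot(N, L), 0.0f);               // dot product + clamp
    float specular = std::pow(std::max(dot(N, H), 0.0f),      // dot product + clamp
                              shininess);                     // exponent
    return lightIntensity * diffuse                           // multiplication
         + lightIntensity * specular;                         // multiplication + addition
}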

“Each of these operations can be performed in one texture unit, except clamping and exponentiation, which sometimes need more.”

Since you’re asking for info: fragment programs and vertex programs make this somewhat simpler.

It doesn’t have to take us all the way to Doom 3. Lighting has been done with textures for years now. Since Quake 1 it was pretty clear that CPUs couldn’t cut it in real time, so lightmaps were created.
A lightmap is basically a texture with pre-computed lighting slapped onto surfaces. Lightmaps are very fast but cannot reach the full accuracy of realtime lighting, and they are static. They are usually computed at design time and stored with the map.
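For the record, applying a lightmap is just a per-texel multiply (a minimal sketch of my own; on fixed-function hardware this would be two texture units with GL_MODULATE):

struct RGB { float r, g, b; };

// surface colour from the base texture, modulated by the
// precomputed illumination stored in the lightmap
RGB applyLightmap(const RGB& baseTexel, const RGB& lightmapTexel)
{
    return { baseTexel.r * lightmapTexel.r,
             baseTexel.g * lightmapTexel.g,
             baseTexel.b * lightmapTexel.b };
}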

Realtime lighting, as currently implemented, is much more detailed than lightmaps, but it is impossible (for now) to get light to bounce off one surface onto another. This is still a point in favour of the old lightmaps.
I fear it will take a while to get interreflection in realtime lighting.

An interesting alternative for ambient/diffuse is spherical harmonics, where you can express the different components using a sequence of texture maps.

Do a google and it’ll all become clear.
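For concreteness, here is a sketch (my own, following the usual 9-coefficient irradiance formulation from the Stanford paper linked further down in the thread) of evaluating the diffuse term from spherical harmonic coefficients for a given normal; the same coefficients can just as well be baked into texture maps:

// L[0..8]: projected lighting coefficients for one colour channel,
// ordered L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22.
// (x, y, z): unit surface normal.
float shIrradiance(const float L[9], float x, float y, float z)
{
    // constants from the Ramamoorthi/Hanrahan irradiance paper
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f,
                c4 = 0.886227f, c5 = 0.247708f;

    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}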

Originally posted by Obli:
Realtime lighting, as currently implemented, is much more detailed than lightmaps

Remember that lightmaps just store the illumination values. The resolution and what actually goes into them is decided by the renderer that creates them.

Originally posted by roffe:
Remember that lightmaps just store the illumination values. The resolution and what actually goes into them is decided by the renderer that creates them.

Anyway, they can at best be calculated per-texel, while realtime lighting is (in most cases) computed per-fragment.
By the way, generating per-texel lightmaps consumes a lot of memory; it’s easy to end up with over 80MB of lightmaps (some professional renderers can generate much more), see the rough numbers below.
As a result, take a look at most games: you’ll find that the lighting equation is evaluated at a fairly coarse texel spacing, which makes it obviously less detailed than its realtime counterpart.
There’s always the fact that, being static, they are obviously less detailed since they lose a dimension, and this is why realtime lighting really rocks after all!
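To put that 80MB figure in perspective, here is a purely illustrative back-of-the-envelope calculation (the surface area and texel density are made-up numbers, just to show the order of magnitude):

#include <cstdio>

int main()
{
    double surfaceArea    = 120000.0; // m^2 of lit surface in a level (assumed)
    double texelsPerMeter = 16.0;     // one lightmap texel every 6.25 cm (assumed)
    double bytesPerTexel  = 3.0;      // uncompressed RGB

    double texels    = surfaceArea * texelsPerMeter * texelsPerMeter;
    double megabytes = texels * bytesPerTexel / (1024.0 * 1024.0);
    std::printf("%.0f texels -> %.1f MB of lightmaps\n", texels, megabytes);
    return 0;                         // prints roughly 30 million texels -> ~88 MB
}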

Cube mapping is an interesting and popular extension that can be used with normal maps for real-time specular lighting from dynamic light sources. It’s pretty power-hungry, but most newer cards support it in hardware. NVIDIA has some excellent material about it on their website. Its usual use is for specular, but it can do full per-pixel lighting and keep it all hardware-accelerated, except for whatever work is needed to generate the maps.
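For what it’s worth, the way cube maps usually enter per-pixel lighting is as a normalization cube map: every texel stores the normalised direction vector that points at it, range-compressed into RGB, so looking up the fragment-to-light vector returns it normalised and ready for the per-pixel dot products. A sketch of building one face (my own code; the other five faces differ only in how (s, t) map to xyz):

#include <vector>
#include <cstdint>
#include <cmath>

std::vector<std::uint8_t> makeNormalizationFacePosX(int size)
{
    std::vector<std::uint8_t> rgb(size * size * 3);
    for (int t = 0; t < size; ++t)
        for (int s = 0; s < size; ++s)
        {
            float sc = 2.0f * (s + 0.5f) / size - 1.0f;
            float tc = 2.0f * (t + 0.5f) / size - 1.0f;
            // direction for the +X face (standard cube-map convention)
            float x = 1.0f, y = -tc, z = -sc;
            float inv = 1.0f / std::sqrt(x * x + y * y + z * z);
            x *= inv; y *= inv; z *= inv;
            std::uint8_t* p = &rgb[(t * size + s) * 3];
            p[0] = static_cast<std::uint8_t>((x * 0.5f + 0.5f) * 255.0f); // [-1,1] -> [0,255]
            p[1] = static_cast<std::uint8_t>((y * 0.5f + 0.5f) * 255.0f);
            p[2] = static_cast<std::uint8_t>((z * 0.5f + 0.5f) * 255.0f);
        }
    return rgb;
}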

Originally posted by Obli:
There’s always the fact that, being static, they are obviously less detailed since they lose a dimension, and this is why realtime lighting really rocks after all!

That’s not always true.
Lightmaps computed with radiosity cannot be equaled by realtime lighting (except in very special cases that never happen in practice) and produce by far the best-quality diffuse maps. That’s why they’re used in most games, if not all.

As for the specular contribution, it is by definition view-dependent, and therefore it has to be computed in realtime.

Environment maps operate under the assumption that the light source is infinitely distant; you cannot implement local lights with them.

The same goes for spherical harmonics lighting. Another drawback is that, as I understand http://graphics.stanford.edu/papers/envmap/ , 9 spherical harmonics are enough only for diffuse lighting; for a more complex BRDF the number of basis functions becomes prohibitively large.
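To make “prohibitively large” concrete (this count isn’t in the paper discussion above, it’s just the standard math): band l of the spherical harmonics contains 2l + 1 basis functions, so keeping every band up to order n costs (n + 1)^2 coefficients per colour channel. For n = 2 that’s 1 + 3 + 5 = 9 (the diffuse case), for n = 4 it’s already 1 + 3 + 5 + 7 + 9 = 25 (the figure mentioned further down for glossy transfer), and it keeps growing quadratically from there.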

AFAIK there are no really good alternatives to lightmaps for photorealistic rendering. Lighting like Doom3’s sucks by definition since it is not global illumination. Of course it might work well in the sewers where Doom3 takes place, but in other situations it’s a poor fit.

IMHO of course.


I agree that SH only works for diffuse + ambient – I even said as much in my first post! You seldom get specular highlights from secondary light, though, so you can add those using traditional methods.

Regarding doing “global” or “location dependent” SH, you can actually define a lattice of different SH samples, store it all in some easily accessible structure (Voronoi regions, 3D textures, whatever), and interpolate between the different global lighting solutions. I haven’t done it myself, but the demo I’ve seen was quite good.
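A rough sketch of what that lattice plus interpolation could look like (my own guess; the structure and names are made up, and bounds clamping is omitted for brevity):

#include <array>
#include <vector>
#include <cmath>

using SH9 = std::array<float, 9>;   // one colour channel, 9 SH coefficients

struct SHGrid
{
    int nx, ny, nz;                 // lattice resolution
    float cellSize;                 // world-space spacing of the samples
    std::vector<SH9> samples;       // nx * ny * nz entries

    const SH9& at(int x, int y, int z) const
    { return samples[(z * ny + y) * nx + x]; }
};

// Trilinearly blend the eight surrounding SH sample vectors at a world
// position; SH coefficients can be interpolated component-wise because
// the basis functions are fixed.
SH9 interpolate(const SHGrid& g, float wx, float wy, float wz)
{
    float fx = wx / g.cellSize, fy = wy / g.cellSize, fz = wz / g.cellSize;
    int x0 = (int)std::floor(fx), y0 = (int)std::floor(fy), z0 = (int)std::floor(fz);
    float tx = fx - x0, ty = fy - y0, tz = fz - z0;

    SH9 out{};
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx)
            {
                float w = (dx ? tx : 1 - tx) * (dy ? ty : 1 - ty) * (dz ? tz : 1 - tz);
                const SH9& s = g.at(x0 + dx, y0 + dy, z0 + dz);
                for (int i = 0; i < 9; ++i)
                    out[i] += w * s[i];
            }
    return out;
}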

you can actually define a lattice of different SH samples

Looks like you’re referring to an approach similar to http://research.microsoft.com/~ppsloan/shbrdf_final17.pdf
I guess Kautz wouldn’t use 25 harmonics unless it were absolutely necessary. So it still requires too many harmonics to be actually usable. IMHO of course.


I think 9 gives you perfectly fine diffuse lighting. But I agree: it’s a speed/quality trade-off.

Okay, I see: you mean not a “location dependent” viewer but a “location dependent” object.
