GLSL manage multiple lights


Since every light in a scene could potentially affect any object, all of the scene's lights have to be set before rendering the objects. So the solutions here are uniform arrays, UBOs, or textures. But even if, with UBOs/textures, one can probably store more lights than one would ever need, the approach remains somewhat static: there is always a fixed maximum number of lights that can be handled, and this number must be known at shader compile time.
Is there any solution to this? Is that (array, UBO) really the “standard” way of handling multiple lights in a scene?
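For reference, the fixed-maximum approach described above typically looks something like this in GLSL (a sketch only; the struct layout and names are illustrative, and the UBO variant is shown commented out since a shader would use one or the other):

```glsl
#version 330 core

// MAX_LIGHTS must be a compile-time constant in GLSL -- the limitation
// the question is about.
#define MAX_LIGHTS 8

struct Light {
    vec4 position;   // convention: w == 0.0 could mark a directional light
    vec3 color;
    float radius;
};

// Plain uniform array...
uniform Light lights[MAX_LIGHTS];
uniform int numLights;   // how many entries are actually valid this frame

// ...or the same data as a uniform block (std140 layout), which can hold
// more data and be rebound cheaply:
// layout(std140) uniform LightBlock {
//     Light lights[MAX_LIGHTS];
//     int numLights;
// };

vec3 shadeAllLights(vec3 P, vec3 N, vec3 albedo)
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < numLights; ++i) {
        vec3 L = normalize(lights[i].position.xyz - P);
        result += albedo * lights[i].color * max(dot(N, L), 0.0);
    }
    return result;
}
```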

Thanks for your help.

> Is there any solution to this? Is that (array, UBO) really the “standard-way” of handling multiple lights in a scene?

What you’re talking about has never been “standard” by any real definition of that term.

Indeed, nowadays, there is nothing you might consider “standard”. There are several options.

The first is to stop pretending that the number of lights you can render in a single pass limits the number of lights you can use in a scene. That is, render each object multiple times, once for each light that affects it (that last part is important: not every light in a scene affects every object, so figure out which lights affect which objects). You add the per-light values together to compute the composite result for each object. This is the closest thing to a “standard”, since it’s been done since the days of Quake.
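A sketch of what the multipass approach might look like: the fragment shader only ever handles one light, and the application accumulates the passes with additive blending. (This is an illustrative sketch, not code from the thread; uniform names are assumptions.)

```glsl
#version 330 core

// One light per draw. The application re-renders each object once per
// light that affects it, with additive blending enabled between passes:
//   glEnable(GL_BLEND);
//   glBlendFunc(GL_ONE, GL_ONE);   // framebuffer += this light's contribution
// (typically with GL_EQUAL/GL_LEQUAL depth testing after a first depth pass).

uniform vec3 lightPos;     // the single light for this pass
uniform vec3 lightColor;
uniform vec3 albedo;       // material parameter for this object

in vec3 fragPos;           // world-space position from the vertex shader
in vec3 fragNormal;

out vec4 outColor;

void main()
{
    vec3 N = normalize(fragNormal);
    vec3 L = normalize(lightPos - fragPos);
    vec3 diffuse = albedo * lightColor * max(dot(N, L), 0.0);
    outColor = vec4(diffuse, 1.0);   // summed across passes by the blend
}
```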

The alternative that is becoming increasingly popular is deferred rendering. It’s a bit more complicated to explain.

In order to compute the reflected color from a point on a surface for a particular light, you need two things: information about the light and information about the surface. The light’s information (position, intensity, etc.) is the same no matter which surface you are shading. The information about the surface, commonly called the “material parameters”, varies from point to point, but for any given point it is the same no matter which light you are shading with.

Deferred rendering is an attempt to leverage that last fact. You don’t render lights anymore. You render the geometry, not to the screen, but to a set of off-screen buffers, commonly called G-buffers. The G-buffers contain the material parameters of whatever surface is visible at each pixel. G-buffer contents typically include a normal, a diffuse color (sampled from a texture), maybe a specular shininess or something similar, and so forth. Oh, and you keep the depth buffer around.
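A sketch of the geometry pass that fills the G-buffers, using multiple render targets. (This particular G-buffer layout is just one possible choice, not something prescribed by the thread.)

```glsl
#version 330 core
// Geometry pass: instead of shading, write material parameters out to
// multiple render targets (the G-buffers).

in vec3 fragNormal;       // view-space normal from the vertex shader
in vec2 fragTexCoord;

uniform sampler2D diffuseMap;
uniform float shininess;

layout(location = 0) out vec4 gDiffuse;   // rgb: diffuse color
layout(location = 1) out vec4 gNormal;    // rgb: normal (packed), a: shininess

void main()
{
    gDiffuse = vec4(texture(diffuseMap, fragTexCoord).rgb, 1.0);
    // Pack the [-1,1] normal into [0,1] for storage in a color buffer;
    // the shininess scale factor (256) is an arbitrary packing choice.
    gNormal  = vec4(normalize(fragNormal) * 0.5 + 0.5, shininess / 256.0);
}
```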

Then, for each light, you render a full-screen quad. The fragment shader reads each of the material parameters from the G-buffers, then computes the light intensity based on the current light that this full-screen quad is working with. It reconstructs the position in an appropriate space via math that I’m not going to get into. With the position and the normal, and the other material parameters, you have everything you need to do lighting.
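The lighting pass described above might be sketched like this. The depth-based position reconstruction shown is one common approach (unproject through the inverse projection matrix); names and the G-buffer layout are assumptions matching the sketch of the geometry pass.

```glsl
#version 330 core
// Lighting pass: run once per light over a full-screen quad, reading the
// material parameters back out of the G-buffers and accumulating each
// light's contribution with additive blending.

uniform sampler2D gDiffuse;
uniform sampler2D gNormal;
uniform sampler2D gDepth;       // the depth buffer kept from the geometry pass

uniform mat4 invProj;           // inverse of the projection matrix
uniform vec3 lightPosView;      // current light, in view space
uniform vec3 lightColor;

in vec2 uv;                     // full-screen quad texture coordinate
out vec4 outColor;

// One common way to reconstruct the view-space position from depth.
vec3 reconstructViewPos(vec2 uv, float depth)
{
    vec4 ndc  = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 view = invProj * ndc;
    return view.xyz / view.w;
}

void main()
{
    vec3 albedo = texture(gDiffuse, uv).rgb;
    vec3 N      = texture(gNormal, uv).rgb * 2.0 - 1.0;  // unpack
    vec3 P      = reconstructViewPos(uv, texture(gDepth, uv).r);

    vec3 L = normalize(lightPosView - P);
    outColor = vec4(albedo * lightColor * max(dot(N, L), 0.0), 1.0);
}
```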

Deferred rendering is more complex to set up, but it is a pretty nice system. It also cuts down a lot on the number of shaders, because “forward rendering” (the first method) combines the material logic with the lighting logic, which tends to multiply into one shader per material/light combination.

Alfonse touched on these but the main options (for dynamic lights) are:
1. Forward Shading: one pass for all lights
2. Forward Shading: one pass per light (with additive blending)
3. Deferred Rendering:
   1. Deferred Shading
   2. Deferred Lighting
   3. Light-indexed Deferred Rendering

Sounds like you’re thinking most about #1 – applying all the lights in one pass – and it sounds like you may want a lot of lights. Besides how to pass all these lights into the shader, think about performance: a big issue is that most lights don’t affect most pixels in the scene (and in fact may not even be visible), so with lots of lights you end up with a horrendous amount of wasted effort in the shader, computing a black contribution from most lights for most pixels. This eats frame time for breakfast and, at the very least, keeps you from doing something useful with the GPU for all those wasted cycles. You can try to do CPU work to “sort” objects by light, to avoid applying lights to objects they don’t affect, but this hits your CPU performance (sorting, extra state changes, splitting batches, etc.), and you end up sending some objects down multiple times because you can’t get things sorted cleanly. …thus Deferred Rendering techniques.
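The “black contribution” problem can be illustrated in shader code: even with a cheap per-light range test, the single-pass loop still has to visit every light for every pixel. (A sketch with assumed names; `radius` here is an assumed per-light attenuation range.)

```glsl
struct Light { vec4 position; vec3 color; float radius; };

uniform Light lights[8];
uniform int numLights;

vec3 shadeOneLight(Light l, vec3 P, vec3 N, vec3 albedo)
{
    vec3 L = normalize(l.position.xyz - P);
    return albedo * l.color * max(dot(N, L), 0.0);
}

vec3 shadeAllLights(vec3 P, vec3 N, vec3 albedo)
{
    vec3 result = vec3(0.0);
    for (int i = 0; i < numLights; ++i) {
        vec3 toLight = lights[i].position.xyz - P;
        // Cheap range test: skip lights too far away to matter...
        if (dot(toLight, toLight) > lights[i].radius * lights[i].radius)
            continue;
        // ...but the loop still visits every light for every pixel,
        // which is the wasted effort described above.
        result += shadeOneLight(lights[i], P, N, albedo);
    }
    return result;
}
```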
I do want to clear up some terminology confusion with Deferred Rendering though. What Alfonse described is actually Deferred Shading (which is one type of Deferred Rendering technique).

Forward Rendering refers to the usual case where visibility, determination of material properties, lighting, and shading all happen in the same pass (at least for one light).

Deferred Rendering techniques “defer” some of this computation for a later pass (or passes). And typically they do this by sampling “…something” at the first opaque surface for every pixel (or sample).

Deferred Shading: think of it loosely as smashing all your surface materials onto the framebuffer, and then going back and applying lights to them in subsequent passes.

Deferred Lighting: think of it as accumulating light irradiance for your surfaces on the framebuffer, and then going back and applying materials to them in subsequent passes.

Light-indexed Deferred Rendering: think of it as bookmarking the light sources that affect each pixel, and then going back and applying those light sources to each pixel.
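One possible sketch of the light-indexed idea: an earlier pass rasterizes each light's volume of influence into an index texture, packing a few light indices per pixel (here, up to four in the RGBA channels, with 0 as a "no light" sentinel). This packing scheme is just one of several used in practice, and all names are illustrative.

```glsl
#version 330 core
// Light-indexed shading pass: look up which lights were "bookmarked"
// for this pixel, then apply only those.

uniform sampler2D lightIndexTex;   // written by the light-volume pass

struct Light { vec4 position; vec3 color; float radius; };
uniform Light lights[256];         // full scene light list

in vec2 uv;
in vec3 fragPos;
in vec3 fragNormal;
uniform vec3 albedo;
out vec4 outColor;

void main()
{
    vec3 N = normalize(fragNormal);
    vec4 indices = texture(lightIndexTex, uv);   // 4 indices stored in [0,1]
    vec3 result = vec3(0.0);
    for (int c = 0; c < 4; ++c) {
        int index = int(indices[c] * 255.0 + 0.5);
        if (index == 0) continue;                // sentinel: no light here
        vec3 L = normalize(lights[index].position.xyz - fragPos);
        result += albedo * lights[index].color * max(dot(N, L), 0.0);
    }
    outColor = vec4(result, 1.0);
}
```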

The main challenges for Deferred Rendering techniques are FSAA (multisample antialiasing) and translucent surfaces, but both can be dealt with.
