Lighting with exposure

Hello everybody. Here’s something I’ve been wondering about; please tell me what you think.

Now, usually lighting in a scene is accumulated by starting from black (or ambient) and then adding the contribution of each light. While this is intuitive, it doesn’t give very good results, especially if your lights have a high dynamic range. Even a couple of low-intensity lights can create unpleasant clamping errors. A common way to solve this is to render with unlimited range and perform an exposure pass afterwards. Unfortunately this requires very sophisticated hardware, so that approach isn’t really available to those aiming for a wide audience.

My suggestion is to change the way light is accumulated in the framebuffer so that exposure is simulated during the rendering phase. Instead of black, start rendering from white, and modulate it with the inverse of each light’s contribution. When all the lights have been rendered, invert the framebuffer. This way the result never gets clamped, no matter how many lights there are. It also seems to correspond to the way light affects real camera film, where the effect of light on the film is proportional to the amount of active chemicals remaining on it. And according to my dim knowledge of the human eye, it should do pretty much the same thing.
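
Here’s a toy per-pixel version of what I mean (the intensities are just made-up example numbers):

#include <stdio.h>

/* Toy sketch of the proposed scheme: accumulate from white by
   modulating with the inverse of each light's contribution,
   then invert at the end. */
int main(void)
{
    float contribution[3] = { 0.6f, 0.5f, 0.02f }; /* per-light intensities */
    float fb = 1.0f;                               /* start from white      */
    for (int i = 0; i < 3; ++i)
        fb *= 1.0f - contribution[i];              /* multiplicative blend  */
    fb = 1.0f - fb;                                /* final inversion       */
    printf("result = %f\n", fb);  /* 0.804: bright, but never clamped, and
                                     the dim 0.02 light barely matters */
    return 0;
}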

The only thing left to worry about is how to take exposure into account when calculating a single light’s contribution. With vertex lighting this is trivial; with register-combiner-hacked lighting models I’m not so sure. Perhaps some approximation would do, or even just applying exposure to the light’s intensity alone and then rendering it as usual.

Anyway, what do you think? Is this correct?

-Ilkka

That doesn’t work. Assume I have two objects, A and B, which are “near” one light each, La and Lb. However, they’re actually within the light radius of both lights. Your proposed scheme would then do this:

A = white
B = white

Draw light La:

A = A - 1/La(A)
B = B - 1/La(B)

Draw light Lb:

A = A - 1/Lb(A)
B = B - 1/Lb(B)

Now, La(B) is a very small contribution, and Lb(A) is a very small contribution. However, the inverting here will cause 1/La(B) to subtract a whole lot from the brightness of this object.

If you try to do modulate instead, you run into basically the same problem: a distant light that doesn’t contribute much will still cause the object to go very dark.

You can start hacking around this by scaling the amount you subtract by one over the square of the distance to the light or stuff like that. Once you’re done compensating for all the bugs, you’ll end up with:

A = La(A) + Lb(A)
B = La(B) + Lb(B)

which is the first model you suggested :)

If you want to do HDR, you have to have HDR compatible hardware.

I think some confusion has occurred about the meaning of the word ‘invert’: it could mean either 1-x or 1/x.

exposure(L) := 1 - exp(-k * L)

inv_exposure(L) := 1 - exposure(L) == exp(-k * L)

exposure(L1 + … + Ln) == 1 - exp(-k * (L1 + … + Ln)) == 1 - (exp(-k * L1) * … * exp(-k * Ln)) == 1 - (inv_exposure(L1) * … * inv_exposure(Ln))

So, theoretically this is correct: you combine lights with multiplicative blending, and invert at the end.
I once had an idea like that, but I didn’t do any tests to see the effect in practice. I think that with some massive multipass + CopyTexImage() + dependent reads it is doable on NV20/R200.
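
If in doubt, here is a quick numeric check of the identity (k and the intensities are arbitrary):

#include <math.h>
#include <stdio.h>

/* Verifies that exposing the sum equals one minus the product of the
   per-light inverse exposures, for one arbitrary pair of intensities. */
int main(void)
{
    double k = 0.7, L1 = 1.3, L2 = 0.05;
    double direct  = 1.0 - exp(-k * (L1 + L2));         /* exposure(L1 + L2)   */
    double blended = 1.0 - exp(-k * L1) * exp(-k * L2); /* multiplicative form */
    printf("direct = %f, blended = %f\n", direct, blended);
    return 0;
}

Both values come out the same, as they must.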

That STILL doesn’t work, because a light that influences you, but is very dim, would make all of you go very dim, even if there’s another light that influences you and is strong.

That’s why additive accumulation of light is how it’s done – that most closely models how it actually works.

But the light is accumulated additively, right before taking the exposure function: final color = exposure(L1 + … + Ln).
The mathematical properties of the exposure function allow transforming the whole equation into equivalent multiplicative blending.
What exactly do you think is wrong?

light that influences you, but is very dim, would make all of you go very dim, even if there’s another light that influences you and is strong

No, jwatte, please do the math:
a dim light means L is near 0
inv_exposure(near 0) = exp(-k * (near 0)) = exp(near 0) = near 1
modulation by a value near 1 means almost no contribution from that light.

When the light is very strong, inv_exposure() is very low. The total product can only get smaller or stay the same as further lights are multiplied in (thus no other light can make the result dimmer), and the final (post-multiplication) inversion turns that very low value into bright light.

Note that inv_exposure() == 0 only for infinite light intensity, which results in 100% white regardless of other light sources.

I’m open to arguments, but so far it works as expected.


Yeah, now I get it. I was too hung up on the missing details in the original post, which led me down the wrong thought path.

I wonder how you’d do the exp() with sufficient precision on pre-float hardware, though. And if you have float, well, you can do HDR the simple way :)

I recall reading a paper that talked about doing exposure with exp() too; your description reminds me a lot of that. I wonder if I can find it on the web again.

Hello again.

I’m sorry if my explanation was confusing. I actually tried this out yesterday, and it seems to work pretty much as I expected. It was pretty easy to hack into my shadow test code, since it really only requires changing the blending mode and inverting the framebuffer colors at the end.

The framerate drop, mainly caused by the additional fullscreen quad that does the inverting, wasn’t really noticeable. I didn’t see any precision problems, although I did test quite a limited number of lighting conditions. The visual results look remarkably better than additive blending when there is more than one light in the scene. My graphics card is a Riva TNT. I could maybe post some pictures if you’re interested.

A problem does arise, however, when doing soft shadows. Normally I would first render the shadow opacity into destination alpha, and then multiply the light’s contribution by it while adding the light. But if lights are accumulated with multiplicative blending, a more complicated blending function is required. Perhaps this can be worked around somehow; I haven’t really thought about it yet.

-Ilkka

Could you give more details? Do you compute the exposure, or do you just use 1 - lightColor?

Could you post some screenshots? I’m intrigued by all this…

Hi,

At the moment I use OpenGL lighting and the blending mode src*zero + dst*(1-src_color), so that corresponds to 1 - lightColor.

I know that only solves half of the problem, since you still can’t use very bright light sources, nor can you adjust the exposure manually. But if you want to do it right, just calculate your own lighting on the CPU, apply exposure to the intensities and use those values instead. That way the results should be pretty much identical to true HDR rendering.

Inverting the framebuffer is done by rendering a white quad over the scene with blending mode src*(1-dst_color)+dst*zero.
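
To make it concrete, the passes look roughly like this (just a sketch; drawSceneWithLight() and drawFullscreenQuad() stand in for my real code):

/* start from white */
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glEnable(GL_BLEND);

/* one pass per light: dst = dst * (1 - src) */
glBlendFunc(GL_ZERO, GL_ONE_MINUS_SRC_COLOR);
for (int i = 0; i < numLights; ++i)
    drawSceneWithLight(i); /* for true exposure, compute 1 - exp(-k * intensity)
                              on the CPU and use it as the light color */

/* final inversion: white quad with dst = src * (1 - dst) */
glBlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO);
glColor3f(1.0f, 1.0f, 1.0f);
drawFullscreenQuad();
glDisable(GL_BLEND);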

-Ilkka

jwatte:
I wonder how you’d do the exp() with sufficient precision on pre-float hardware, though.

Precision of exp() alone is not a problem; dependent reads can do miracles. All computation requiring unbounded range can be encoded into a single 2D LUT texture. You access it by 0-1 ranged coordinates, and get a 0-1 ranged result.
The texture requires recomputation each time the light intensity or the exposure coefficient (k) changes.
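
Something like this for a 1D slice of it (a sketch; a real version would be 2D, as described above, with k and the intensity range as parameters):

#include <math.h>

/* Fills a 256-entry lookup table with inv_exposure(L) = exp(-k * L),
   mapping the 0-1 texture coordinate onto the 0..maxL intensity range.
   Re-run whenever k or the intensity range changes. */
void buildInvExposureLUT(unsigned char lut[256], float k, float maxL)
{
    for (int i = 0; i < 256; ++i) {
        float L = maxL * (float)i / 255.0f;
        lut[i] = (unsigned char)(expf(-k * L) * 255.0f + 0.5f);
    }
}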

What i’m really worried about is:

  1. low speed of dependent reads (at least on my GF3)
  2. precision loss when doing multiplicative blending in exponential space, where several lights overlap.

It is high time to see if it really works, so I just started writing the test app. I have a base app to modify, so it shouldn’t take long.

JustHanging:
A problem does arise, however, when doing soft shadows

The whole trick assumes that when you evaluate exposure(L), L should be the result of the full lighting equation (of a single light), so L should already include the shadow coefficient. I don’t know exactly what the result of multiplying an ‘exposed’, non-linear value by some other, linear value will be, but I’m afraid it will be distorted. I see no way of doing exposure(L) correctly without dependent reads.
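
A quick numeric illustration of that distortion (k, L and the shadow coefficient s are arbitrary):

#include <math.h>
#include <stdio.h>

/* Applying a shadow coefficient s after the exposure is not the same
   as folding it into L before the exposure. */
int main(void)
{
    double k = 1.0, L = 3.0, s = 0.5;
    double correct   = 1.0 - exp(-k * (s * L)); /* shadow included in L      */
    double distorted = s * (1.0 - exp(-k * L)); /* shadow applied afterwards */
    printf("correct = %f, distorted = %f\n", correct, distorted);
    return 0;
}

With these values the two differ quite a lot (about 0.78 versus 0.48).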

I could maybe post some pictures if you’re interested.
Yes, I am.

Hi,

You are right about the shadow thing, it will be distorted, but I kinda thought it wouldn’t be so bad. Soft shadows are tricky to get in one pass unless you use shadow mapping, which is damn hard to get working in an actual 3D engine.

I’m curious to see where you get with your dependent reads; that way you could get this working with per-pixel lighting. But I think the trick can be used without them too, if not for a mathematically correct image, then at least to improve image quality a bit.

I took a couple of screenshots from my test app, but I don’t have access to my web space from home. I’ll post the images some time tomorrow.

-Ilkka

jwatte :
I recall reading a paper that talked about doing exposure with exp() too; your description reminds me a lot of that. I wonder if I can find it on the web again.

Maybe this is the link you were looking for: http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm

That’s a great link, I’ve seen it too.

Here are the images I promised:
http://www.hut.fi/~ikuusela/images/compare.jpg
See how the light in the normal images gets clamped and produces ugly artifacts. Parts of the second shadow get totally washed out by lights 1 and 3. If the lights were omnidirectional, two lights would be enough to cause these problems, and a third light would make the scene mostly white. The new method is free of all such problems. The first two pictures are identical, since I don’t apply exposure to a single light’s contribution.

Also, I think the exposure function can be pretty well approximated by

Exposure(x) = 1 - (1 - x/a)²

where a is a constant that can be derived from the k in the original exposure function. This could be computed with one texture unit without dependent reads if you pre-divide light intensities by a.
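
For instance, matching the slopes of the two functions at x = 0 gives a = 2/k. A quick comparison along those lines (the k value is arbitrary):

#include <math.h>
#include <stdio.h>

/* Compares the quadratic approximation 1 - (1 - x/a)^2 with the true
   exposure 1 - exp(-k*x), using a = 2/k so the slopes match at x = 0.
   The approximation saturates at x = a, which is why light intensities
   have to stay below a certain limit. */
int main(void)
{
    double k = 1.5, a = 2.0 / k;
    for (double x = 0.0; x <= a; x += 0.25) {
        double approx = 1.0 - (1.0 - x / a) * (1.0 - x / a);
        double exact  = 1.0 - exp(-k * x);
        printf("x=%.2f  approx=%.4f  exact=%.4f\n", x, approx, exact);
    }
    return 0;
}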

I don’t know about the precision issues, though; guess I’ll just have to try it out.

-Ilkka

Looks great. Smells like something I have to dive into on my Radeon 9700 Pro. No precision issues there, but it looks like this solves the light-accumulation precision issues (since you have clamping and only 32-bit color on the Radeon as well, unless you use a float buffer, of course… but then no blending… and it starts to get really messy).

Anyway, it at least looks great, and that’s what it’s all about, no?

But how do you get the color modulation in there? If you want to do diffuse+specular for one light in one pass, you have to do it with the color map involved at that time. That seems hard to express using any kind of regular texture environment functions.

If you modulate the color map at the end, you could do it instead of the fullscreen quad by using BlendFunc(GL_ONE_MINUS_DST_COLOR, GL_ZERO), I suppose? Except then all your specular will be modulated the same way as the diffuse.

Yeah, that’s a problem. It would be best if you could just use the diffuse map for every light. I think that would make it look right, but it would be slightly different from the old lighting models.

Diffuse light, for example, wouldn’t have its maximum intensity defined by the base texture. I don’t see any physical reason why it should, though. I’m gonna try that out for sure.

I think the textures should also be somehow preprocessed before use. Since they are usually based on photographs or human perception, they already have exposure applied to them, so using them as they are would result in double exposure. Anyway, applying inverse exposure at load time shouldn’t be too hard if you can make some assumptions about the original exposure settings.

-Ilkka

Oh yes, and you don’t have to render diffuse+specular in one pass. Specular is just added on top of the diffuse, so when using exposure, it can be modulated into the framebuffer.

-Ilkka

Ok, another update. I hope someone still reads this so I’m not just talking to myself here…

I implemented my approximation formula, so now I have real exposure with an adjustable exposure constant. It works as long as the lights’ intensities remain below a certain limit. If the limit is exceeded, the light must be rendered in two or more passes. Some precision problems occur in dark areas of the image; bright areas are OK. I don’t know how well texturing will hide this, I haven’t added it yet. With 10 bits per color channel the precision issues should definitely go away; aren’t some of the new cards capable of this?

Unfortunately I don’t have any real exposure-capable renderer, so I can’t compare the results.

-Ilkka