Dynamic Range

so. now we have the point where you are wrong, and it's in fact the topic.
you are RIGHT with your idea of how to get a value that helps to adjust the exposure factor. i never stated that's a bad idea. i never stated it's a bad idea to sum up what you see, because that's how the eye does it.
you're not right either that i did not provide you a solution to the problem. i gave you a solution, you just don't believe it's one. but it works, i've tested this, nutty tested this. we used it for glowing / blurring of the screen. but all you need to know is: by recursively scaling down the texture you can get the average color. and this average color is a very good starting point for getting your exposure factor. for the rest, i suggest you take a piece of paper and a pen, draw some curves of exposure functions and try to find a way to adjust the constant factor. how to do that i don't know, we can discuss this if you want (i bet you want to ).
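the recursive downscale described above can be sketched like this (a minimal sketch, assuming a square power-of-two grayscale image stored as a list of rows; function names are made up):

```python
def downscale(pixels):
    """Halve a square image by averaging each 2x2 block (a box filter)."""
    n = len(pixels) // 2
    return [[(pixels[2*y][2*x] + pixels[2*y][2*x+1] +
              pixels[2*y+1][2*x] + pixels[2*y+1][2*x+1]) / 4.0
             for x in range(n)]
            for y in range(n)]

def average_color(pixels):
    """Recursively downscale until a single pixel remains -- that pixel
    is the average of the whole image."""
    while len(pixels) > 1:
        pixels = downscale(pixels)
    return pixels[0][0]
```

on hardware you would do the same thing by rendering the texture onto a half-sized quad with bilinear filtering, repeatedly, until you reach 1x1.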
the suggestion of the other, summing up the specular visible objects, is good as well. in fact, what he suggests is doing the same thing i suggested, but doing it offline, in your scene, instead of from your screen. why this? because a) you have floating point there and b) well, that's about it…

the suggestion of the other has the problem that it is not a perfect solution but more of a stochastic approach. a) he suggests only using the bright objects, the specular highlights in fact, b) you normally don't know exactly what is on your screen, and how big the speculars are, so how bright in the end they are…

the problem with my approach is, once again, old hardware. because you render the color data, which then gets clamped, you'll **** your result up if you have too many values clamped down to 1. but you could a) use this as a feature, not as a bug or b) search for another way…

how to use it as a feature:
say your summed-up value is more than .5; that means about half of your pixels could be 1 (possibly!), i.e. about half of your pixels could be clamped. so darken the image a bit. next frame you do the same, etc… that way your eye needs time to adjust to big changes. means: if the sum is bigger than .5 you make it darker, if it's lower than .1 you make it brighter. that is a statistical approach which, i bet, fits the eye very well, because the eye doesn't know the exact values either, just: oh ****, there are a lot of cells at max brightness => close the lens. oh ****, most of the cells are not used at all => open the lens. without knowing by how much…
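that close-the-lens / open-the-lens rule is a one-liner per frame. a sketch, using the .5 and .1 thresholds from the text; the step size is a made-up constant that controls how fast the "eye" adapts:

```python
def adjust_exposure(exposure, average_brightness, step=0.05):
    """One feedback step on the exposure factor, driven by the average
    brightness of the previous frame (e.g. the 1x1 downscale result).
    Small steps mean the eye needs time to adapt to big changes."""
    if average_brightness > 0.5:    # lots of pixels likely clamped -> close the lens
        exposure -= step
    elif average_brightness < 0.1:  # scene mostly black -> open the lens
        exposure += step
    return exposure                 # in between: leave the lens alone
```

run once per frame, with the previous frame's average, and the exposure converges without ever needing to know the true (unclamped) brightness.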

thats more text than i wanted to write

I must not have made myself clear (how does that not surprise me). I totally believe your solution about scaling down the texture. That is why I kept suggesting automipmapping as a solution.
Automipmapping would create that 1x1 texture just by loading the framebuffer as a texture (without the recursion). The only thing I pointed out was that you did not mention that you need a half-texel offset for the box filter to work.

So, what the hell were we ever arguing for?

Now, about clamping. One goal I have is to prevent clamping on limited range hardware. On hardware with limited precision and range, I was thinking of using the exposure function as a prescale on the lights. Dim all the lights so that they do not clamp too much. On high precision / high range hardware I would just use the light intensity values raw and scale down/up at the end.

Basic example: if I have two lights with intensities 10000 and 5000, I could scale the 10000 so that it's .667 and the 5000 so that it's .333, so where they both add up on a white surface the result ends up being 1.
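That prescale is just dividing each light by the summed intensity (a sketch; nothing here is framebuffer-specific):

```python
def normalize_lights(intensities):
    """Scale raw light intensities so they sum to 1, so that even where
    all lights overlap on a white surface nothing clamps."""
    total = sum(intensities)
    return [i / total for i in intensities]
```

For the 10000/5000 example this yields .667 and .333, which is exactly the "too dark" naive case the next paragraph fixes.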

This naive approach will make the scene too dark, so I’ll use the average brightness of a previously rendered frame to find out how bright the scene actually is (any one of the methods mentioned should work, to some degree, to provide this number), and use that to scale it. Because I am using the average and not the max, they can still add up past 1. But I see this as a feature (like you said).

So, because of this, range is no longer a problem, just precision.

I saw Nutty’s glare demo, it’s really cool, and one of the reasons I am interested in this (besides the double-bright gamma trick in Quake 3, and a hint that Doom 3 handles this problem in some fashion).

I am going to test rendering the scene to a 128x128 texture, then reading back the auto-generated 1x1 mipmap. I have never used auto-mipmapping before, so there is probably some big gotcha, like not being able to access the generated mipmaps. If so, I would need a fall-back anyway, so I’ll use your solution. Hopefully it will be fast enough not to cause a hitch every half second. A 64x64 texture may even be large enough to give a good idea of how bright the scene is.

I should be able to bang something out this weekend.

automipmapping always does point sampling on my gpu with the newest drivers, even though nvidia states that's not true…

and the .5-texel shift is a) logical and b) an exercise for you, i said that before

>>My thought was getting the average brightness of the screen. I think you are saying thats a bad solution.<<
hm… that statement told me you did not understand that i supported your way

>>Anyone know how the auto-exposure on a camcorder works?<<
i guess it works the way i said. if there are too many full-white pixels it closes, if there are too many full-black pixels it opens up. that's quite easy, just do some statistics on this…

i'm not sure how your way works exactly (the measuring and the reacting to the measurement), but it's best you try it yourself.

hope to see something fancy, but somehow i doubt it (colors on 32bit rgba systems are simply not cool. q3 is an exception there, yes…)

Okay, I completely agreed with your method of finding an average, but just because you provide a solution to someone’s problem doesn’t mean you agree that the problem they are solving needs to be solved. Just a misunderstanding.

I will code my test program in such a way that I can easily try many different ways to calculate the exposure.

Counting the number of ‘pegged’ and ‘floored’ CCD elements sounds intuitively like the way a piece of hardware would work. I’ll try that, along with averaging and finding the max and min. I’ll throw in normalizing all the light values, and a manual exposure control to round things out.
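All those candidate measurements can share one pass over the frame's brightness values (a sketch; the thresholds for "pegged" and "floored" are assumptions, since real CCDs saturate a little below full scale):

```python
def frame_stats(pixels, floor=0.0, peg=1.0):
    """Collect the candidate exposure-control statistics in one pass:
    fraction of pegged (fully bright) and floored (fully dark) values,
    plus the average, max and min brightness."""
    n = len(pixels)
    return {
        "pegged":  sum(1 for p in pixels if p >= peg) / n,
        "floored": sum(1 for p in pixels if p <= floor) / n,
        "average": sum(pixels) / n,
        "max":     max(pixels),
        "min":     min(pixels),
    }
```

Each exposure strategy then just picks the statistic it wants to drive the virtual iris with.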

Can’t wait to port it all to DX9 class hardware, where it should look a bit better.

I have been using the terms ‘virtual iris’ and ‘exposure’ interchangeably, but I now think that they may be different things.

Exposure represents the logarithmic response of film, CCDs, and the cells in the eye to light. That means that even if the sun is 1000 times brighter than a candle, we may only perceive it as being 3 or 5 times brighter. Exactly like how a sound has to be about 10 times more intense before we will say it is twice as loud.

The virtual iris is different: it represents how much light is let into the camera/eye, and it scales brightness linearly.

So, you scale the intensity of the light linearly to represent the iris, then logarithmically to represent the response to light, then you clamp it to represent the limits of whatever system you are simulating.
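That three-stage pipeline, written out. The saturating curve 1 - exp(-k*x) is a common choice for this kind of logarithmic-style film/eye response, not something specified in this thread; k is a free constant:

```python
import math

def perceived(intensity, iris=1.0, k=1.0):
    """Virtual iris (linear light admission), then a saturating exposure
    curve (film/CCD/eye response), then a clamp to the displayable range."""
    x = intensity * iris             # stage 1: iris scales linearly
    y = 1.0 - math.exp(-k * x)       # stage 2: response compresses big values
    return min(max(y, 0.0), 1.0)     # stage 3: limits of the simulated system
```

With this curve a light 1000 times more intense comes out only modestly brighter, which is the whole point: huge input ranges map into [0, 1].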

To put it in camera terms, the virtual iris is the camera’s aperture (f-stop), and the exposure is the film speed (the faster the film, the more dynamic range it has).

To keep things simple, I guess the exposure should be constant like it is in real life (although I bet your eye’s exposure changes as you get older), and the only thing that should be variable is the virtual iris.

well, i had the problem that i always thought about the exposure adjustment, and for calculating the real exposure you need high-precision, big-range colors, no real way around it…

There is an nVidia demo showing high dynamic range and exposure, so it’s not strictly required. But it’s probably not something you can do in anything other than a demo.

I think the best I will be able to do without a floating point frame buffer is normalize the lights so that they are unlikely to clamp. My light ‘intensity’ will not actually be intensity, but the perceived brightness, which means I would end up scaling the light intensity by the exposure per light instead of per pixel. The linear interpolation in the framebuffer between ‘bright’ and ‘twice as bright’ will be wrong because it should be logarithmic. But I think I will just have to implement it and see if it looks bad.
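The error being accepted there can be made concrete: applying the response curve per light and letting the framebuffer add the results is not the same as adding raw intensities and applying the response once per pixel (a sketch, reusing a 1 - exp(-x) curve as a stand-in response function):

```python
import math

def response(x):
    """A saturating exposure-style response curve (assumed, for illustration)."""
    return 1.0 - math.exp(-x)

lights = [2.0, 3.0]  # raw intensities of two lights overlapping on one surface

per_light = sum(response(i) for i in lights)  # per-light exposure, summed linearly
per_pixel = response(sum(lights))             # the correct per-pixel result

# per_light overshoots per_pixel (and can exceed 1 and clamp): the framebuffer's
# linear addition ignores that the response should flatten as total intensity grows.
```

So the per-light approximation is always too bright where lights overlap, which is exactly the artifact to look for when judging whether it "looks bad".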

you can't do it without high-range colors. either you have them, or you hack them. the next hw generation does have them. the nvidia demo hacks them. but hacking something onto hw that doesn't normally support it is only useful for demos (though they're pretty cool)