HL2's High Dynamic Range Rendering

Pop N Fresh,
okay, that was a misunderstanding then. I thought your point was about correcting some non-linearity in the lighting.
HDR lighting should have an adaptive range but is still linear. That was my point.

MAN!!, I wish I knew as much as you guys :slight_smile:

What? You’ve never heard of bittorrent?

if you don't understand why HDR rendering is needed to achieve good lighting results, read this first:
http://freespace.virgin.net/hugo.elias/graphics/x_posure.htm

and if you want more theoretical input about light:
http://freespace.virgin.net/hugo.elias/graphics/x_physic.htm

Really nice article; it gets the idea across in a simple way. But I still don't understand how it can be implemented with all this floating point stuff. Is it just a matter of more precision between the lower and higher brightness values? Or would an exponential function be used too?

If not… can anyone give a link to a paper or an article that explains how it could be implemented in real-time graphics? (not raytracing stuff; I can already figure out how that would be done)

with fixed point you normally have a range from 0…1 or -1…1, or on some hardware up to -8…8… still, it's a very small range.

floats can store ranges spanning several billionzillions, and still with very high detail at each magnitude. we all know that this is why we use floats in general-purpose math.

so with floats you can store any value, and this is exactly what we need for hdr lighting… you can store sunshine at 1000000.f and the small candlelight at 0.01f, and do the same math in both cases and you get the correct result in the end.

that's why floats are so useful. sure, 32bit fixed point would rock, too, and if I were able to use 128bit fixed point I would not use floats anymore, but floats are a handy middle-tool. they can express a huge range (a "high dynamic range"), and still have quite a lot of detail in the small numbers…
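as a quick sketch of that point (the numbers here are made up, just illustrating the scale): with floats, the same math works for sunlight and candlelight alike, while a 0…1 fixed-point buffer has already thrown the information away.

```python
# illustrative numbers, not from any real renderer: floats keep
# relative precision across a huge range, so the same math works
# for sunlight and candlelight alike
sun = 1000000.0      # bright sunlight
candle = 0.01        # dim candlelight

# halve both intensities; the ratio between them is preserved
assert sun * 0.5 == 500000.0
assert candle * 0.5 == 0.005

# a 0..1 fixed-point buffer would already have clamped the sun,
# so any math done afterwards starts from a meaningless value
clamped_sun = min(sun, 1.0)   # stored as 1.0
```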

how it could be done? with ARB_fp… by just doing it. store the light values in some constants there, and do your per-pixel lighting. HDR is actually rather AUTOMATIC if you use ARB_fp. all you have to do is store light values in a high dynamic range instead of the normal colours from 0…1, and done.

what you need to do in the end, to “compress” it onto screen, is to use some sort of exposure function.
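a minimal sketch of one common exposure curve, 1 - e^(-exposure * intensity), along the lines of the exposure article linked above (the constant and sample intensities here are assumptions):

```python
import math

def expose(intensity, exposure=1.0):
    # map an unbounded HDR intensity (0..+inf) into 0..1 without
    # clamping: the classic 1 - e^(-exposure * intensity) curve
    return 1.0 - math.exp(-exposure * intensity)

# candlelight stays visible, sunlight saturates toward (never past) 1.0
dim = expose(0.01)          # small but non-zero
bright = expose(1000000.0)  # essentially 1.0
```

the nice property is that no two distinct input intensities map to the same output, unlike a hard clamp, so overbright detail is compressed rather than destroyed.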

Ok, thanks. So I was correct in my idea that it is not a float-specific thing, just a matter of "my word is bigger than yours"…

But that would give good results only if you use shaders for ALL the stuff in the rendering. I don't think hardware (except for really high-end stuff) can handle this (I may be wrong).

Anyway it could be useful to have a feature that enables this kind of stuff automatically.

Originally posted by OldMan:
[b]Ok, thanks. So I was correct in my idea that it is not a float-specific thing, just a matter of "my word is bigger than yours"…

But that would give good results only if you use shaders for ALL the stuff in the rendering. I don't think hardware (except for really high-end stuff) can handle this (I may be wrong).

Anyway it could be useful to have a feature that enables this kind of stuff automatically. [/b]

if you have one of today's high-end cards, like any radeon9500 or better, you can do this, and it is supported rather automatically. as i said: if you use ARB_fp, you get floating point in the pixel shaders, and once you have that, you can directly use lighting values in a high dynamic range instead of the 0…1 range which is used by default. entirely doable, and entirely FAST, on today's hw.

all gfFX cards support that, too. they just have some problems with floating point texture support, and are quite slow at executing floating point ARB_fragment_programs compared to the radeons.

Nice discussion.
But in the end, isn't all this HDR stuff just about using a logarithm instead of using the brightness value directly?


DrTypo

so with floats you can store any value, and this is exactly what we need for hdr lighting… you can store sunshine at 1000000.f and the small candlelight at 0.01f, and do the same math in both cases and you get the correct result in the end.

I still don't get it; there must be more to it. If you basically do diffuse lighting on a surface, you just end up with huge color values, which then equate to 1,1,1 when clamped to the framebuffer. Is that right?

If it's linear, I don't understand why 8 bits can't represent it. Just normalize the HDR values. Sure, it's a lot less resolution of lighting values, but if you only have 1 light in the scene, what difference does it make if it's 10000,10000,10000 or 255,255,255? The end result is still clamped to 8 bits per channel anyway.

Clamping is a very bad way to approximate how the eye and/or a camera works. It looks crap to be honest. In the real world the sun can easily be 1 million times brighter than the light in a small room on a cloudy day. To represent that kind of dynamic range you really need floating point.

I don't think you read what I wrote at all.

I know floating point offers greater dynamic range. But regardless of what format you use, it will still get clamped to 8 bits per channel in the framebuffer…

Or does it just make the internal precision a lot better when accumulating light intensities?

yes, it will still be clamped in the framebuffer if you just write a value to it.

The real use for HDR is while doing the calculations. Let's consider motion blur, as it is easy to explain.

Example:
You have a pixel with the value 8 that is motion-blurred along the x-axis over 4 pixels.

the correct output of that pixel would be [2 2 2 2], since it still has the same energy. But if this pixel had been clamped (to 0…1) by the buffer before the blurring, the result would be [0.25 0.25 0.25 0.25], just leaving a greyish mess.

Now, this is not an especially good explanation, but it might give you an idea of what the problem is.
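the example above, as a quick sketch (the helper names are made up for illustration):

```python
def blur(value, width):
    # spread one pixel's energy evenly over `width` pixels
    return [value / width] * width

def clamp(x):
    return min(x, 1.0)

# blur in HDR first, clamp only for display: still a bright streak
hdr_first = [clamp(v) for v in blur(8.0, 4)]   # [1.0, 1.0, 1.0, 1.0]

# clamp to the 0..1 buffer first, then blur: the greyish mess
ldr_first = blur(clamp(8.0), 4)                # [0.25, 0.25, 0.25, 0.25]
```

the only difference between the two results is the order of the clamp, which is exactly why the blur has to run on unclamped values.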

Nutty,
the idea is to scale the accumulated lights back to some range that fits, aka exposure. This scaling should be dynamic, based on the average intensity of a scene, and is akin to the way real cameras work, and also how your eyes adjust to environments.

The portions of a scene that still exceed 1 after downscaling are then essentially ‘blindingly bright’, like looking directly at the sun in an otherwise pretty dark ambience (overcast weather or something like that), or highlights on shiny surfaces where the actual light source is not visible.

This stuff can then be decorated with glow effects to simulate the loss of vision around blinding light sources.
You can also make the top of the range non-linear to emulate saturation and make blind spots stay ‘on the eyes’ for a little while.
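a minimal sketch of that kind of dynamic exposure (the 'key' value and scene numbers are assumptions, not from any real renderer):

```python
def auto_expose(pixels, key=0.5):
    # scale the scene so its average intensity lands at `key`,
    # the way a camera's auto-exposure meters a shot
    avg = sum(pixels) / len(pixels)
    return [p * (key / avg) for p in pixels]

scene = [0.1, 0.2, 0.3, 40.0]   # mostly dim, one blinding highlight
exposed = auto_expose(scene)
# most of the scene now fits in 0..1; the highlight still exceeds 1
# and can be treated as 'blindingly bright' (glow, saturation, etc.)
```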

nutty, read about exposure; the link is given above. you don't need to CLAMP to 0…1 in the end. you just need to range-compress in some way, and exposure is much better suited for that. you can also add glow to the overbright parts. and if you do motion blur, it looks much nicer if you have brighter-than-white values (the same goes for depth of field and all other blurs), because of the accumulation.

you still don’t get it? get a radeon and watch the demos

Nutty,
the idea is to scale the accumulated lights back to some range that fits, aka exposure. This scaling should be dynamic, based on the average intensity of a scene, and is akin to the way real cameras work, and also how your eyes adjust to environments.

Right, that's what I wanted to know. Thanks.

you don’t need to CLAMP to 0…1 in the end.

Dave, you can't get out of the clamping, that's what I'm saying. Until we have floating point framebuffers and floating point RAMDACs, it will ALWAYS get clamped. And that ain't gonna change for a good few years.

So basically when you said,

how it could be done? with ARB_fp… by just doing it. store the light values in some constants there, and do your per-pixel lighting. HDR is actually rather AUTOMATIC if you use ARB_fp. all you have to do is store light values in a high dynamic range instead of the normal colours from 0…1, and done.

It isn't just "done"; there is more to do, such as scaling back to a given exposure, which is not automatic.

hmmm… need to try it now… but I have no dx9 card… bah…

nutty, there are more ways than clamping to compress -inf…+inf to 0…1. one would be repeat (wrapping), which would be rather stupid, but at least a fun effect. with exposure, you don't need any clamping: after exposure, your values are in the range 0…1 if they were between 0…+inf before (which they normally are in real lighting environments).

exposure IS a way without clamping.

and there is NO need for an HDR screen. we can watch movies on tv and they look real to us. and THIS is what our current target in gamedev is: "cinematic gaming, cine-fx". now to do this, we have to simulate two things: first, a correct lighting environment (impossible, but hdr is a BIG step in the right direction), and second, a correct recording camera. again impossible, but with exposure, glow, depth of field, and some other effects (dirty lens, water on the lens, etc.), we get rather near there, too.

this camera IS an important part, and a camera is by default NOT hdr-capable. so that is no problem.

But remember that we do NOT see the world through a camera. We see it with our own eyes (at least I do). Lens flares etc. are NOT realistic effects in MOST situations, unless you are developing a game where everyone wears glasses. We need to aim for eyes… not cameras.

With the current speed of development we will need floating point framebuffers and even more advanced display devices in just a few years.

So I was finally able to download the HL2 HDR video from the straining servers. Is it just me, or does it look kind of crap? They have this really low-contrast, low-resolution, non-HDR static lighting, no self-shadowing on the antlion, and then extremely aliased, huge-ass specular highlights from a clouded sky? Is it supposed to look like the stuff is wet or what? That would need reflections and much darker materials. Honestly, it looked really bad.

Now we're getting to the question "why so much specular", and the fact that most people think that everything looks like plastic…

I do agree that much of it looks like plastic, or wet, but that's not because of the specular; it's the lack of other global lighting. in fact, on a sunny day, most stuff has much more specular on it than we would guess. of course you need the right angle between you, the sun and the object, but not many of my friends even believed that nearly everything from tree leaves to asphalt to different mortar actually has quite a powerful specular on it, not to mention steel and everything that's painted. The lack of shadow I can agree on, but I don't think the specular was that overused.