HL2's High Dynamic Range Rendering

Hi

Did you see Half-Life 2's video about their HDR rendering?

Here it is: http://www.fileshack.com/file.x?fid=3619

I wonder: what is so special about it? I can do that effect pretty well on a GeForce 4, and I'm sure it could even be achieved on a GeForce 1 or 2. However, I need an extra rendering pass for it.

I read in some article that HL2 uses a really cool, absolutely new technique (HDR rendering) and that it is only possible on the latest hardware (DX9).
So I wonder whether it might be possible, on the latest hardware, to do this effect without an extra pass. What is so special about what HL2 does? Somewhere I also read that float textures/buffers would be necessary for this. Is that true?

Note: The video only shows the “glow effect” at the beginning. The rest of the video shows merely bump-mapped specular lighting (and/or bump-mapped environment mapping).

Jan.

The nice bit is that when you're inside, the outside glows; but when you go outside, the overall light intensity is a lot higher and the sky doesn't glow.

I'm not sure how this could be accomplished, other than via interaction from the CPU, basically telling the blur control to activate because we're in a dark environment.

To answer your question, I saw nothing that could not be done on a GF4. DX8 hardware has support for 16-bit HDR textures. Not float textures by any means, but significantly increased precision over standard 8-bit textures.

Given that Halo did the blur thing over trees years ago on DX8 tech, I really don't know why HL2 requires so much PS 2.0.

Water reflections/refractions seem to increase in quality from DX8.0 to DX8.1 to DX9.0, but there's no hugely significant leap in difference.

The shadows look particularly poor. The wooden boards on the roof look like they’re floating, and the character ones looked quite jaggy in areas.

Nutty, you can easily accomplish that difference between indoor and outdoor by switching tone maps (pre-blur), and only blurring parts that are > 1.0 in intensity (post-tone-map).

What would be REALLY cool would be getting pixel value statistics out of the renderer and feeding them back into the tone map selection process (a.k.a. “auto exposure”), but because that can't be done right now, you can have artists tag areas with preferred exposure settings and interpolate between them or something.
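To make the first part concrete, here's a rough CPU-side sketch of the math (the function names and exposure numbers are made up, and the tone map is just a plain exposure scale; in practice this would live in a fragment program):

[code]
#include <algorithm>

struct RGB { float r, g, b; };

// Pick an exposure depending on where the camera is: indoors you expose for
// the dark interior, outdoors for the bright sky.
float chooseExposure(bool indoors) { return indoors ? 4.0f : 0.5f; }

// Tone map first (here just an exposure scale standing in for a real tone map)...
RGB toneMap(const RGB& hdr, float exposure) {
    return { hdr.r * exposure, hdr.g * exposure, hdr.b * exposure };
}

// ...then keep only what is still brighter than 1.0 as input to the blur/bloom pass.
RGB brightPass(const RGB& mapped) {
    return { std::max(mapped.r - 1.0f, 0.0f),
             std::max(mapped.g - 1.0f, 0.0f),
             std::max(mapped.b - 1.0f, 0.0f) };
}
[/code]

The blurred bright-pass result then just gets composited back over the tone-mapped frame.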

The really cool thing is how easily and generically you can solve it, without having to mess with anything complicated, which you would have to do to get it working on a GF4. On an R300 you actually get this by default; only the exposure and glow you have to do yourself. The HDR is implicit.

I agree with Nutty.
I think they went all Pixel Shader 2.0 because it's “cool” to use these, but most of the time it's completely uncalled for.

From the screenshots I’ve seen I can’t tell the difference either. It just seems that the DX9 screenshot is a bit brighter than the others.
We’ll have to see the final game to judge whether or not using every bit of DX9 was really required.

It seems from Derek Perez's comments that NVIDIA feels the same. He basically says that parts that don't need to be DX9 will be scaled back to improve performance on the GF-FX range.

Yeah, the pixel value statistics would be nice… I suppose you could render the scene to a small pbuffer, and do a read pixels (Mmmmn… async read pixels), and actually accumulate the average intensity of the pixel data. On, say, a 32x32 window this wouldn't be too slow.

That's actually tempting to just try myself!
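Something along these lines, maybe (untested, and it assumes a GL context that's already current and bound to the small pbuffer the scene was rendered into):

[code]
#include <GL/gl.h>

// Read back a 32x32 float RGB block and average its intensity on the CPU.
// Assumes the scene has already been rendered into the current (pbuffer) context.
float averageIntensity32x32()
{
    float pixels[32 * 32 * 3];
    glReadPixels(0, 0, 32, 32, GL_RGB, GL_FLOAT, pixels);

    float sum = 0.0f;
    for (int i = 0; i < 32 * 32; ++i) {
        const float* p = &pixels[i * 3];
        sum += (p[0] + p[1] + p[2]) / 3.0f;   // plain mean of R, G and B
    }
    return sum / (32.0f * 32.0f);
}
[/code]

That average could then drive the exposure setting for the next frame.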

Why the hell aren't FileShack using BitTorrent? Then all a big queue would mean is a faster download.
Oh, because they make money out of selling the 'premium' download service, of course.

Originally posted by Nutty:
I suppose you could render the scene to a small pbuffer, and do a read pixels (Mmmmn… async read pixels), and actually accumulate the average intensity of the pixel data.

You can accumulate and average without doing a readpixels call by using the Summed Area Table technique presented at GDC03 by NVIDIA.
http://www.opengl.org/developers/code/gdc2003/GDC03_SummedAreaTables.ppt

The trick is to render to an active texture by slicing your texture into rows/columns and rendering line primitives. However, since each primitive call depends on the previous one, you basically have to do a finish after each. Whether this turns out to be faster than the readpixels call + CPU accumulation is difficult to say. It would be interesting to see a chart or something showing where the cutoff might be, taking GPU/CPU/texture size/etc. into account.
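For comparison, the plain CPU version of a summed area table is only a few lines; the slides are about building the same table with render-to-texture passes instead:

[code]
#include <cstddef>
#include <vector>

// Build a summed area table: sat[y][x] = sum of img over the rectangle (0,0)..(x,y).
// With it, the sum (and thus average) of any rectangle is four lookups.
std::vector<std::vector<float>> buildSAT(const std::vector<std::vector<float>>& img)
{
    std::size_t h = img.size(), w = img[0].size();
    std::vector<std::vector<float>> sat(h, std::vector<float>(w, 0.0f));
    for (std::size_t y = 0; y < h; ++y)
        for (std::size_t x = 0; x < w; ++x)
            sat[y][x] = img[y][x]
                      + (x ? sat[y][x - 1] : 0.0f)
                      + (y ? sat[y - 1][x] : 0.0f)
                      - (x && y ? sat[y - 1][x - 1] : 0.0f);
    return sat;
}

// The average of the whole image is the bottom-right entry divided by the pixel count.
float averageFromSAT(const std::vector<std::vector<float>>& sat)
{
    std::size_t h = sat.size(), w = sat[0].size();
    return sat[h - 1][w - 1] / float(w * h);
}
[/code]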

[This message has been edited by roffe (edited 09-13-2003).]

To me it looked like everything was TOO shiny, like it was wet or something. Maybe that is the effect they wanted, the “everything is wet because it just rained” look on that part of the game, I dunno. It did look cool, but it also looked a little weird to me.

-SirKnight

Nutty, you can easily accomplish that difference between indoor and outdoor by switching tone maps (pre-blur), and only blurring parts that are > 1.0 in intensity (post-tone-map).

Know of any places off hand that talk about this in more detail? I’ve been planning to do this kind of effect for a while, just never gotten around to it. Thanks.

-SirKnight

What's wrong with talking about it here?

High dynamic range rendering is the first obvious use for floating-point fragment shading. You can now do all your lighting in a proper linear space instead of the implicit gamma space currently used. Since your basic operators (add, multiply, etc.) don't give proper results when used in gamma space, this gives you an overall improvement in visual quality.

It just requires an extra post-process to map your floating-point results to your monitor.
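A quick worked example of what goes wrong in gamma space (the numbers are picked arbitrarily, just for illustration):

[code]
#include <cmath>
#include <cstdio>

// Adding two equal lights should double the luminance. That holds in linear
// space, but not if you add the gamma-encoded (2.2) values directly.
int main()
{
    const float gamma = 2.2f;
    float light = 0.25f;                           // linear luminance of one light

    float linearSum  = light + light;              // correct: 0.5
    float encoded    = std::pow(light, 1.0f / gamma);
    float gammaSum   = encoded + encoded;          // naive add of encoded values
    float decodedSum = std::pow(gammaSum, gamma);  // luminance that add really produces

    std::printf("linear: %.3f   gamma-space add decodes to: %.3f\n",
                linearSum, decodedSum);            // ~0.5 vs ~1.15 -- far too bright
    return 0;
}
[/code]

Two equal lights should add up to twice the luminance; adding the gamma-encoded values and letting the monitor decode them gives you roughly 1.15 instead of 0.5.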

Originally posted by Nutty:
What's wrong with talking about it here?

Well, nothing, but if there were somewhere that already talked a lot about it, I'd like to know and read it.

-SirKnight

Hm, when I did some stuff with specular lighting etc. I would have liked to have more precise textures. But then they were to be used more in a mathematical way.

Do float textures and float buffers (for color values, not something else) really have so much impact? I really don't see why 8 bits per channel shouldn't be enough.
Or are there some other tricks which can then be applied?
I would be grateful if someone could give me a bit more background information about this.

About the wet look: I think that everything looks really cool and realistic, EXCEPT for the shingles (or tiles?), the stuff the monster is standing on.
Those look really unrealistic. Usually that stuff is not very shiny. And even if it were wet, it should look different (although I don't know how; I'll have to wait for the next rain).

Bye,
Jan.

Originally posted by Pop N Fresh:
High dynamic range rendering is the first obvious use for floating-point fragment shading. You can now do all your lighting in a proper linear space instead of the implicit gamma space currently used. Since your basic operators (add, multiply, etc.) don't give proper results when used in gamma space, this gives you an overall improvement in visual quality.

It just requires an extra post-process to map your floating-point results to your monitor.

Now what are you talking about?
The post-processing you refer to is already commonplace; it's the gamma LUT in the RAMDAC and has been programmable for ages. All you need is an appropriate gamma ramp to make linear operations in pre-gamma (i.e. shading) produce linear luminance changes.
Proper monitor calibration is the keyword here; it has nothing to do with your shaders, nor even your rendering system.

About the wet look: I think that everything looks really cool and realistic, EXCEPT for the shingles (or tiles?), the stuff the monster is standing on.
Those look really unrealistic. Usually that stuff is not very shiny. And even if it were wet, it should look different (although I don't know how; I'll have to wait for the next rain).

Yeah, the roof that the monster was standing on is the part I thought just didn't look right. It was WAY too shiny; that's why to me it sort of looked wet, not exactly but kind of. The other shiny things looked OK though. The part where the player was inside that wooden building thing was, to me, the best part of the whole demo.

-SirKnight

I just watched the demo again and the monster looks too shiny to me too. It has that kind of wet look also. City 17 must have just gotten some rain or something.

-SirKnight

Originally posted by zeckensack:
Now what are you talking about?
The post-processing you refer to is already commonplace; it's the gamma LUT in the RAMDAC and has been programmable for ages. All you need is an appropriate gamma ramp to make linear operations in pre-gamma (i.e. shading) produce linear luminance changes.
Proper monitor calibration is the keyword here; it has nothing to do with your shaders, nor even your rendering system.

Ideally you want a linear relation in your lighting system. A light with a value of 2 should give off twice the energy of a light with a value of 1. When everything must be squeezed and quantized into an integer range of [0..255], this isn't possible.

Let's say you made the strength of a nightlight 1 and the noontime sun 255 to cover your entire range. In this system, 255 nightlights are going to be as bright as the noontime sun, which is obviously incorrect. Once you move to floating point, you can correctly model the relative brightness of something like a nightlight and the sun.

This isn’t to say that there aren’t various techniques to get around this problem. But using a full High-Dynamic Range rendering system simply makes the problem go away.

The post-process I was talking about assumed you would want to render to a floating-point target so you could do exposure control, blends, bloom effects, etc. without a loss of precision. Since current hardware cannot display a floating-point buffer directly, this would need to be mapped to something it can handle for display.
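Per pixel, that mapping pass could be as simple as something like this (just a sketch; the exposure curve and constants are arbitrary):

[code]
#include <algorithm>
#include <cmath>

// Map one HDR (linear float) value into a displayable 8-bit value:
// a simple exposure curve, then gamma-encode for the monitor.
unsigned char mapToDisplay(float hdr, float exposure = 1.0f, float gamma = 2.2f)
{
    float exposed = 1.0f - std::exp(-hdr * exposure);   // squashes [0, inf) into [0, 1)
    float encoded = std::pow(exposed, 1.0f / gamma);    // compensate for the monitor's gamma
    float clamped = std::min(std::max(encoded, 0.0f), 1.0f);
    return static_cast<unsigned char>(clamped * 255.0f + 0.5f);
}
[/code]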

[This message has been edited by Pop N Fresh (edited 09-13-2003).]

Originally posted by Pop N Fresh:
But using a full High-Dynamic Range rendering system simply makes the problem go away.

Almost. Now everyone just needs to go buy one of the HDR displays seen at SIGGRAPH for a complete system! http://www.sunnybrooktech.com/tech/index.html

Originally posted by Pop N Fresh:
Ideally you want a linear relation in your lighting system. A light with a value of 2 should give off twice the energy of a light with a value of 1. When everything must be squeezed and quantized into an integer range of [0..255], this isn't possible.

Exactly. Without backbuffers supporting more than 8 bits of precision (or 10, as in 10-10-10-2), the gamma LUT is not really suited for anything other than converting from the regular Windows gamma (2.2) to whatever your monitor requires.

The average Joe probably thinks that gamma correction is about brightness/contrast and not about linearity, as this is often the way it is presented in games.