HL2’s High Dynamic Range Rendering

Originally posted by OldMan:
[b]But remember that we do NOT see the world through a camera. We see it with our own eyes (at least I do). Lens flares etc. are NOT realistic effects in MOST situations, unless you are developing a game where everyone wears glasses. We need to aim for eyes… not cameras.

With the current speed of development we will need floating point framebuffers and even more advanced display devices in just a few years.[/b]

Depends on the game. For HL2, yes, but for, say, Zelda or other third-person games, no. There, modelling a camera is adequate, and IMHO better: it really looks like a movie, since those are captured by cameras too…

The best is Metroid Prime, IMHO. It is the view through the eye, but she has a visor in front of it. Raindrops with refraction, dirt, cold gas, etc., all affecting the glass for cool effects. Beautifully made.

Well… as I said above, it depends on the game. I like camera-recorded games. My eye will add the “I see it through the eye” effects anyway.

Mazy, I actually agree with you. It wasn’t so much the specularity as the lack of reflections (they occur on all highly specular surfaces; specular is just a hack to replace HDR reflections anyway) and the inconsistency of the lighting. It looked like HDR specular tacked on, not like high quality realistic lighting.

Could be that the light (and thus not coherent with the look of the sky) was much brighter in the HDR than the rest… therefore only the light has the intensity to bounce off again, and we only see the light at the reflection angle… but you’re right: with that diffuse a sky there should be less intensity difference between the sun and the rest of the sky, and so with that much specular you should be able to see more of the reflected sky.

But I still think it’s pretty OK… And the faceted sphere looked like it suffered from the same problem that I have: floating point textures don’t have bilinear filtering. On bump-mapped surfaces it was less noticeable, so I think I can continue without it for now.

It’s just a matter of adjustment. But it does need a few corrections; the specular is making everything look as if there were a layer of glass over all the materials.

In fact I have always thought that specular is overapplied in modern CG examples and products. I would rather have less specular than reality than more specular than reality.

No self-shadowing on the antlion.

Hmm, I didn’t notice that. That’s quite bad then. Are the shadows just character projections onto the world? I thought they were full-scene depth map shadows.

Originally posted by OldMan:
[b]But remember that we do NOT see the world through a camera. We see it with our own eyes (at least I do). Lens flares etc. are NOT realistic effects in MOST situations, unless you are developing a game where everyone wears glasses. We need to aim for eyes… not cameras.

With the current speed of development we will need floating point framebuffers and even more advanced display devices in just a few years.[/b]
I second that
I always thought lens flares were a complete waste of time.

Well, the real lens-flare effect is usually quite annoying, I think.
However, if you change the code only a bit, you can turn it into a bright blinding glow. This usually looks good, it is natural (because looking into the sun is usually unpleasant), and in some games like CS it would add a strategic component, because having the sun at your back can be a disadvantage for your opponent.
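If it helps, the core of it is tiny. A C sketch (everything here, names and the exponent included, is made up purely for illustration):

[code]
/* Sketch: a "blinding glow" factor instead of a classic lens flare.
   view_dir and sun_dir are normalized; sun_visibility is 0..1 from
   whatever occlusion test you already have. All names hypothetical. */
#include <math.h>

float blinding_glow(const float view_dir[3], const float sun_dir[3],
                    float sun_visibility)
{
    float d = view_dir[0] * sun_dir[0]
            + view_dir[1] * sun_dir[1]
            + view_dir[2] * sun_dir[2];
    if (d < 0.0f)
        d = 0.0f;
    /* A high exponent keeps the glow tight around the sun; 8 is arbitrary. */
    return powf(d, 8.0f) * sun_visibility;
}
[/code]

Blend a white full-screen quad with that factor as its alpha and you have the blinding effect.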

Jan.

There was a point in time when publishers were lens-flare obsessed. When they reviewed games, the number one priority was “What’s the lens flare like?”

Maybe they all wear glasses… big, thick, ancient glasses. And maybe some of them wear fisheye glasses…

You’re all playing too many stupid braindead first-person shooters. I actually dislike first person quite a bit; I prefer cinematic cameras and all that. Zelda 64 was one of the best games with that design; I haven’t played the new one yet (cartoony, yeah).

For such games, camera effects add something: they make it more cinematic, and they definitely look good.

Yeah, there was once an F1 game with a lens flare. You could laugh or cry about it: perfectly circular, colourful rings around a sun, and that was it. Same blue sky, no brightening, nothing. F1 World Championship on the N64 showed how to do it better years ago… driving toward the sun, the freshly laid road reflected the whole sun, the sky went white with brightness, and a huge lens flare (not a perfectly circular one, btw) filled the screen. The result was that you were nearly blinded for a moment. It looked VERY natural and good.
And it added a strategic element too, yes… it isn’t that easy to drive blind.

Originally posted by Nutty:
Hmm didn’t notice that. Thats quite bad then. Are the shadows just character projections onto the world? I thought they were full scene depth map shadows.

Well, in the previous videos it actually looked much worse; I haven’t checked whether that was improved:

In the first fight through the city, where you move as part of a team with some bots, you can see that the lighting of the characters and of the buildings is not done in the same way. When going through a heavily damaged house with holes in the wall, the characters are either fully lit or fully shadowed; the buildings do not project shadows onto the characters.

Yeah, I pretty much figured the world was just using static light maps, or shadow maps as the case may be.

I remember one of the videos showing the bump-mapping feature in HL2, but I saw a website showing the poor quality of dynamic lights in HL2 (they looked like per-vertex dynamic lights), so I haven’t seen any use of the bump-mapping in the videos. Unless it’s just used for ambient bump-mapping.

Since current hardware cannot display a floating point buffer directly, it would need to be mapped to something the hardware can handle for display.

This is the question that I haven’t seen the answer to yet. I understand how to make computations using HDR. But how do you effectively turn an HDR buffer into one that can be displayed?

Do you just find the minimum and maximum intensities and use them to convert it to the [0, 1] range in a linear fashion (i.e., subtract the lowest from every color and divide by the difference between the lowest and highest)? Or do you convert it to [0, 1] via some other means?
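Concretely, the linear version I have in mind would be something like this (a C sketch over a single-channel buffer; a real version would presumably work per channel or on luminance):

[code]
/* Sketch of the linear remap described above: find min/max over the
   HDR buffer, then map linearly into [0, 1]. */
void linear_remap(float *pixels, int count)
{
    float lo = pixels[0], hi = pixels[0];
    for (int i = 1; i < count; ++i) {
        if (pixels[i] < lo) lo = pixels[i];
        if (pixels[i] > hi) hi = pixels[i];
    }
    float range = hi - lo;
    if (range <= 0.0f)
        range = 1.0f; /* flat buffer; avoid dividing by zero */
    for (int i = 0; i < count; ++i)
        pixels[i] = (pixels[i] - lo) / range;
}
[/code]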

Originally posted by Korval:
This is the question that I haven’t seen the answer to yet. I understand how to make computations using HDR. But, how do you effectively turn an HDR buffer into one that can be displayed?

The answer is called tone mapping, which should be familiar to anyone who has developed their own film.

[b]

Do you just find the minimum and maximum intensities and use them to convert it to the [0, 1] range in a linear fashion (i.e., subtract the lowest from every color and divide by the difference between the lowest and highest)? Or do you convert it to [0, 1] via some other means?[/b]

You have just described one way of doing it. There are many others. The obvious problem with a linear scale is that a light source visible in the scene could be a thousand (or whatever scale you use) times brighter than any surface. Since most people still use monitors/displays with a limited 8-bit-per-channel output, the light source would be the only thing visible. I recommend this dissertation, which has a good summary of previous work.
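For instance, a simple global operator in the style of Reinhard et al. compresses the range non-linearly instead of rescaling it, so a visible light source no longer pushes everything else to black. A C sketch, operating on luminance only (the key value 0.18 is a common “middle grey” choice, not a requirement):

[code]
/* Sketch of a simple global tone mapping operator: scale the scene so
   its log-average luminance maps to a chosen key, then compress with
   L / (1 + L), which maps [0, inf) into [0, 1). */
#include <math.h>

void tone_map(float *lum, int count, float key /* e.g. 0.18 */)
{
    double sum = 0.0;
    for (int i = 0; i < count; ++i)
        sum += log(1e-4 + lum[i]); /* epsilon avoids log(0) */
    float log_avg = (float)exp(sum / count);

    for (int i = 0; i < count; ++i) {
        float L = lum[i] * key / log_avg; /* scale to "middle grey" */
        lum[i] = L / (1.0f + L);          /* compress into [0, 1) */
    }
}
[/code]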

Excellent paper (I haven’t finished it yet; I will later). I think the “manual” setting of the aperture is the right way to go, so that if the sun is sitting on the horizon, you can darken the scene while still allowing the sun to over-brighten significantly.

Granted, rather than using one of his computations, I’ll likely set the value myself based on what I can determine about the scene from the CPU (whether the sun is visible, different sectors may have different aperture values, etc.). After all, I may want to simulate the effect of walking out into a sunny day when one was just inside a dark cave (massive overbrightening that shrinks down over time), or the reverse (lots of darkness that slowly resolves itself).
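In other words, I’d drive a single exposure value per frame and let it converge toward a target the CPU picks, roughly like this (a sketch; every name here is made up):

[code]
/* Sketch of manually driven exposure with smooth adaptation over time.
   target is chosen by the application from scene knowledge (sun visible,
   current sector, etc.); tau controls how fast the "eye" adapts. */
#include <math.h>

float adapt_exposure(float current, float target, float dt, float tau)
{
    /* Exponential approach: a large jump at first that shrinks over
       time, like stepping out of a dark cave into daylight. */
    return target + (current - target) * expf(-dt / tau);
}
[/code]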

BTW, given that float buffers do not (yet) offer blending operations, how do you deal with multi-pass operations and HDR? Is it economical to swap float buffers and pull the old value from the buffer you just rendered to (especially considering that binding a new buffer as a render target likely induces a significant stall in the pipeline)?

BBTW, before I start playing around with this, is there a program or a web site or something that I can use to test/set my monitor’s gamma? I want to make sure everything’s linear before going into HDR.

[This message has been edited by Korval (edited 09-19-2003).]

Anyone care to explain exactly why we need a floating point frame buffer?

I see no need for the frame buffer that is scanned out to the monitor by the RAMDAC to be anything but fixed point.

Even if we had front and back buffers that were floating point, you would still need an ‘exposure shader’ to convert the floating point values to something between 0 and 1 for the RAMDAC. There is no need for such a specialized programmable stage. Just use a fragment shader written as an ‘exposure shader’ to copy a floating point pbuffer to a display buffer.

All that is really missing is blending on float buffers.
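The per-fragment math can be as simple as a film-style exposure curve. A C sketch of what such a shader would evaluate (names are illustrative; in practice this runs in a fragment shader over a fullscreen quad textured with the float pbuffer):

[code]
#include <math.h>

/* Maps any HDR value in [0, inf) into [0, 1); 'exposure' plays the
   role of the camera aperture/shutter. */
float expose(float hdr_value, float exposure)
{
    return 1.0f - expf(-exposure * hdr_value);
}
[/code]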

Originally posted by Korval:
BTW, given that float buffers do not (yet) offer blending operations, how do you deal with multi-pass operations and HDR?

I use the “front/back pbuffer ping-pong” technique, if you want to call it that. It works, but I can’t say more than that. It might be too slow for interactive rates.
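Roughly, the loop looks like this (a C sketch; bind_as_render_target, bind_as_texture and draw_pass are placeholders for the actual pbuffer/render-texture calls, not real API):

[code]
/* Sketch of front/back pbuffer ping-pong for multi-pass HDR without
   float blending: each pass reads the previous result as a texture
   and writes the accumulated value to the other buffer. */
void render_passes(int num_passes)
{
    int src = 0, dst = 1; /* indices of two float pbuffers */
    for (int pass = 0; pass < num_passes; ++pass) {
        bind_as_render_target(dst);
        bind_as_texture(src);   /* previous accumulation as input */
        draw_pass(pass);        /* shader adds this pass's contribution */
        int tmp = src; src = dst; dst = tmp; /* swap roles */
    }
    /* 'src' now holds the final HDR result for the exposure pass */
}
[/code]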

Originally posted by Korval:

BBTW, before I start playing around with this, is there a program or a web site or something that I can use to test/set my monitor’s gamma? I want to make sure everything’s linear before going into HDR.

http://www.cbloom.com/3d/gamma.html

I don’t think the instructions at that link work correctly. When I set my desktop to the proper gamma value (for me, around 2.1), everything gets way too bright. Is it something that should only be set for full-screen rendering modes, or should it be set for the desktop too?

It seems that blending is only supported for those formats that a frame buffer can have?

Why is this?
Will next-gen hardware have floating point blending?