HL2’s High Dynamic Range Rendering

You should only use the desktop gamma setting to correct it to the standard Windows gamma (2.2). A monitor is usually in the 2.0-2.5 range, so it should only be a small adjustment. My monitor has a gamma of 2.5, so I use a gamma correction of 2.5/2.2 = 1.14.

If you want to display a linear light space, convert it to gamma space with “out = pow(in, 1/2.2)”.
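For illustration, a minimal sketch in C of both directions of that conversion, assuming a plain 2.2 power law rather than the piecewise sRGB curve:

#include <math.h>

/* Power-law transfer functions; inputs and outputs are in [0,1]. */
static float linear_to_gamma(float lin) { return powf(lin, 1.0f / 2.2f); }
static float gamma_to_linear(float enc) { return powf(enc, 2.2f); }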

Why is this?

Because floating point operations cost transistors.

Will next-gen hardware have floating-point blending?

Probably. But I wouldn’t count on it being fast (not that rendering to a float buffer is particularly fast anyway).

My monitor has a gamma of 2.5, so I use a gamma correction of 2.5/2.2 = 1.14.

My gamma seems to be 2.1, which means I set my desktop gamma to 0.95 (2.1/2.2). However, for full-screen rendering, do I set the gamma to 2.1 or 0.95?

Is it really necessary to find out the exact maximum and minimum color intensity of the scene? Wouldn’t it be enough to calculate an estimate of max/min using the material properties of visible objects and the lights which affect them?

Set it to 0.95, or let the user do it him/herself with the graphics card drivers. Without floating-point framebuffers, the precision in linear space (gamma 1.0) would be really bad, so it’s better to let the framebuffer remain in gamma space and do the conversion from linear space to gamma space yourself (either in software or using shaders on the GPU).

I’m probably stating the obvious, but textures and colours are almost always in gamma space already, so you have to convert them to linear space if you want to do stuff with them in that space. And it’s not efficient to store textures in linear space either, as 8 bits per channel in linear space doesn’t cut it.
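As a sketch of that round trip for a single 8-bit channel (the helper name and the plain 2.2 exponent are illustrative assumptions, not any particular engine’s code):

#include <math.h>

/* Decode a gamma-space texel to linear, do the lighting math there,
   then re-encode the result for the gamma-space framebuffer. */
static unsigned char shade_texel(unsigned char texel, float light)
{
    float lin = powf(texel / 255.0f, 2.2f);   /* gamma -> linear */
    lin *= light;                             /* math happens in linear */
    if (lin > 1.0f) lin = 1.0f;
    return (unsigned char)(powf(lin, 1.0f / 2.2f) * 255.0f + 0.5f);
}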

Doing calculations in linear space is far from free, because of all the conversions. But in some cases it really pays off. Antialiasing is one thing I wouldn’t want to do without taking gamma into account (in software, of course; hardware has to handle this by itself). The result is way smoother.
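A quick way to see the difference is averaging two edge pixels; a sketch, again assuming a plain 2.2 gamma. For black (0) and white (255), the naive average gives 128, while averaging in linear space re-encodes to about 186, which is what actually looks like the halfway point on screen:

#include <math.h>

static unsigned char average_naive(unsigned char a, unsigned char b)
{
    return (unsigned char)((a + b) / 2);   /* averages the encoded codes */
}

static unsigned char average_linear(unsigned char a, unsigned char b)
{
    float la = powf(a / 255.0f, 2.2f);     /* decode both to linear */
    float lb = powf(b / 255.0f, 2.2f);
    float avg = 0.5f * (la + lb);          /* average real intensities */
    return (unsigned char)(powf(avg, 1.0f / 2.2f) * 255.0f + 0.5f);
}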

Is it really necessary to find out the exact maximum and minimum color intensity of the scene? Wouldn’t it be enough to calculate an estimate of max/min using the material properties of visible objects and the lights which affect them?

It doesn’t need to be precise. Indeed, it’s probably a good idea to allow for a little over-brightening (when appropriate, like when looking into the sun).
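As a sketch of that idea (the function, its names, and the headroom factor are illustrative, not from any particular engine):

/* Map the scene's estimated maximum intensity to slightly above 1.0,
   so very bright sources (e.g. the sun) clip instead of dragging the
   exposure of the whole image down. */
static float apply_exposure(float intensity, float estimated_max)
{
    float scale = 1.25f / estimated_max;   /* 1.25 leaves overbright headroom */
    float out = intensity * scale;
    return out > 1.0f ? 1.0f : out;        /* overbright parts simply clip */
}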

so it’s better to let the framebuffer remain in gamma space and do the conversion from linear space to gamma space yourself (either in software or using shaders on the GPU).

It’s bad enough to waste 2 cycles per fragment manually converting a floating-point value to a [0-1] range. Now, you’re suggesting that I waste more cycles doing the gamma correction, when the hardware itself already has gamma correction (and, in the case of ATi’s multisampling, it expects that you’re working in linear space)? At some point, you have to cut your losses on an effect. Also, blending is not really possible in gamma-space; at least, not without a gamma-space blending function. So what’s the point of having the framebuffer in gamma-space?

I’m probably stating the obvious but textures and colours are almost always in gamma space already so you have to convert them to linear space if you want to do stuff with them in that space.

How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn’t that the whole point of gamma correction?


Originally posted by Korval:
How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn’t that the whole point of gamma correction?

The whole point is that calculations are supposed to be done in linear space; otherwise every step of the calculation accumulates errors. You still want colours and textures to look the same.

In 90% of all software, when you create an image to use as a texture, you do it in gamma space and thus must convert it to linear space, do your calculations, and then convert the final image back. If you don’t do the gamma->linear conversion you will get the same bland, greyed-out look as when you gamma-corrected your desktop with the 2.1 setting.

And considering the performance issues, yes, it’s very expensive. The current generation of hardware can do it, but not without a serious penalty.

Ideally we would like a conversion to linear space at the texture samplers (before filtering) and higher-precision framebuffers, so the built-in gamma conversion could be used (as the conversion 1.0->CRT and not 2.2->CRT as today) without running into stepping issues. Because the non-linearity of gamma space is very close to our eyes’ sensitivity, it becomes a very efficient method of compression. But without it, 8 bits per channel is going to suffer from stepping in the darker areas of the image.

Come to think of it, it could probably work fine with a 10-bit framebuffer. I haven’t tried it yet, though. 10-10-10-2 would probably be good enough for textures that don’t use the alpha channel.
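A rough way to put numbers on the stepping argument (again assuming a plain 2.2 power law): compare the size of the first step above black, in displayed luminance, for the different encodings.

#include <math.h>
#include <stdio.h>

/* Size of the smallest step above black for each encoding. The point
   is the orders of magnitude, not the exact values. */
int main(void)
{
    printf("8-bit linear:    %g\n", 1.0 / 255.0);            /* ~3.9e-3 */
    printf("10-bit linear:   %g\n", 1.0 / 1023.0);           /* ~9.8e-4 */
    printf("8-bit gamma 2.2: %g\n", pow(1.0 / 255.0, 2.2));  /* ~5.1e-6 */
    return 0;
}

Near black, the 8-bit gamma encoding’s steps are orders of magnitude finer than 8-bit linear’s; 10 bits narrows that gap but doesn’t close it.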

How did the image texture or colors get into gamma space? If I pick a color out of a color selector (with the correct gamma value set for the monitor), the RGB values for that color should be linear. Isn’t that the whole point of gamma correction?

Because you authored them in that space. If you created them in linear space, you shouldn’t do the conversion.
