I’ve just read that tutorial about HDR:
The author suggests that in order to store a bigger range of values in the color buffer, a GL_RGB16F (or higher-precision) FBO texture is required. What I don’t get is: why? Can’t we just do these two “post-processing” steps (tone mapping and, possibly, gamma correction) in the original program that draws the scene? So what’s the point of having higher-precision FBO attachments?
Can’t we just do these two “post-processing” steps (tone mapping and, possibly, gamma correction) in the original program that draws the scene?
Theoretically… yes. As long as you’re not using any form of multi-pass rendering where you have to accumulate lighting data by successively passing over the same objects/scene data. So you can’t do it with deferred rendering, or with generalized forward rendering, or with many other techniques. The only way to make it work is to render each object with a fragment shader that iterates over all of the lights affecting it, accumulates their contributions, applies tone mapping, and writes the result to an sRGB framebuffer image.
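The reason multi-pass accumulation breaks is that tone mapping is nonlinear: tone-mapping each lighting pass and summing the results in an LDR buffer is not the same as summing the HDR light first and tone-mapping once. A minimal sketch (assumptions mine: the Reinhard operator x / (1 + x) stands in for whatever tone-map curve you use):

```python
# Sketch: why per-pass tone mapping breaks multi-pass light accumulation.
# The Reinhard operator is used purely as an illustrative tone-map curve.

def tonemap(x):
    # Reinhard: maps [0, inf) HDR intensities into [0, 1)
    return x / (1.0 + x)

# Two lights contributing HDR intensities to the same fragment
light_a, light_b = 2.0, 3.0

# Correct: accumulate in HDR (needs a float FBO), tone-map once at the end
correct = tonemap(light_a + light_b)          # tonemap(5.0) = 0.8333...

# Broken: tone-map each pass, then accumulate in an LDR buffer
broken = tonemap(light_a) + tonemap(light_b)  # 0.6667 + 0.75 = 1.4167
# 1.4167 exceeds 1.0, so an 8-bit LDR attachment would clamp it, and the
# clamped value bears no fixed relationship to the correct result.
```

Since tonemap(a + b) ≠ tonemap(a) + tonemap(b), any technique that accumulates lighting across passes has to keep the running sum in a high-precision buffer like GL_RGB16F.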
Also, a number of other things become very difficult. Bloom, for example: since you’ve lost the original HDR data (you’re only storing gamma-corrected/tone-mapped values), you can no longer access the information you need to drive bloom, which is ultimately based on values above a certain HDR threshold. Likewise, it becomes difficult to do a number of transparency effects like scattering, since many of them depend on the absolute intensity of a light source. Post-processing fog and similar effects don’t work either. And so on.
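To make the bloom point concrete, here is a sketch (assumptions mine: a bright-pass with a threshold of 1.0 in linear HDR units, and Reinhard as the example tone-map curve). After tone mapping, every value is squeezed below 1.0, so no fixed LDR threshold can pick out the real highlights:

```python
# Sketch: why bloom's bright-pass needs the HDR data, not the tone-mapped result.

def tonemap(x):
    # Reinhard: maps [0, inf) HDR intensities into [0, 1)
    return x / (1.0 + x)

hdr_pixels = [0.8, 1.5, 10.0, 100.0]   # linear scene intensities

# In HDR: a simple bright-pass with threshold 1.0 picks the real highlights
bright_hdr = [x for x in hdr_pixels if x > 1.0]   # keeps 1.5, 10.0, 100.0

# After tone mapping, every value sits in [0, 1):
ldr_pixels = [tonemap(x) for x in hdr_pixels]
# 10.0 and 100.0 differ by 10x in HDR but by less than 0.1 after tone
# mapping, and an 8-bit buffer quantizes that further: the intensity
# information bloom needs is simply gone.
```

The same loss of absolute intensity is what breaks scattering-style transparency effects: once the buffer only holds display-referred values, “how bright was this light, really?” is no longer answerable.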
So really, you may as well do it the right way the first time.
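The “right way” is: accumulate lighting into a high-precision attachment (e.g. GL_RGB16F), then run one final full-screen pass that tone-maps and gamma-encodes into the 8-bit output. A minimal numeric sketch (assumptions mine: Reinhard tone mapping and a plain 1/2.2 gamma encode standing in for proper sRGB conversion):

```python
# Sketch of the pipeline: float HDR buffer -> tone map -> gamma -> 8-bit output.

def tonemap(x):
    # Reinhard: maps [0, inf) HDR intensities into [0, 1)
    return x / (1.0 + x)

def gamma_encode(x, gamma=2.2):
    # Simple power-law encode; real sRGB uses a slightly different curve
    return x ** (1.0 / gamma)

# Stand-in for a GL_RGB16F color attachment: linear float intensities
hdr_buffer = [0.25, 1.0, 4.0, 16.0]

# Final full-screen pass, done once, after all lighting is accumulated
ldr_out = [round(255 * gamma_encode(tonemap(x))) for x in hdr_buffer]
# The full 0.25..16.0 HDR range survives as distinct, ordered 8-bit values.
```

Doing both conversions once, at the very end, keeps every earlier pass in linear HDR, which is exactly what the float FBO attachment buys you.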