how to remove gamma correction


This tutorial (Modern OpenGL 07 – More Lighting: Ambient, Specular, Attenuation, Gamma — Tom Dalling) says something about gamma correction, and that the image files are already gamma corrected. There it is solved by using a different internal texture format. As you may already know, I'm using Qt, and there is no sRGB format, so I thought I'd do the correction myself.

Just one question: is the formula I'm using here correct?

Yes, this is correct. sRGB -> linear is x^2.2, and linear -> sRGB is x^(1/2.2).

Not quite. It's an approximation, but it's not the exact transformation. Please see sRGB - Wikipedia

You should ideally let OpenGL perform the conversion.

For reading sRGB-encoded textures (or any gamma-corrected texture, which will be much closer to sRGB than to linear), telling OpenGL that the texture is sRGB will result in filtering being performed correctly; interpolating then converting manually will be incorrect. This shouldn’t require any support from Qt, just using GL_SRGB8 or GL_SRGB8_ALPHA8 (rather than GL_RGB8 or GL_RGBA8) as the texture’s format. Manually converting the result of e.g. texture2D() is only correct when using nearest-neighbour sampling (no interpolation).
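In raw GL terms the only change is the internal-format argument when uploading the texture; a sketch, assuming a current GL context, where `pixels`, `w` and `h` stand in for your image data:

```cpp
// Declare the storage as sRGB so the driver linearises texels
// *before* filtering; the pixel data itself is unchanged.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0,
             GL_SRGB8_ALPHA8,      // instead of GL_RGBA8
             w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
```

With this format, `texture2D()` in the shader returns linear values and no manual decode is needed.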

Conversion from linear intensity to sRGB-encoded value on output may require support from Qt if you perform the conversion when writing to the default framebuffer. This can safely be done manually.
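If the default framebuffer is not sRGB-capable, the manual output conversion would look something like this as the last step of the fragment shader (a GLSL sketch; `linearColor` is a placeholder for whatever lit colour your shader computes in linear space):

```glsl
#version 330 core
in vec3 linearColor;
out vec4 fragColor;

// Exact sRGB encode, applied manually on output.
vec3 linearToSrgb(vec3 l) {
    return mix(l * 12.92,
               1.055 * pow(l, vec3(1.0 / 2.4)) - 0.055,
               step(0.0031308, l));
}

void main() {
    fragColor = vec4(linearToSrgb(linearColor), 1.0);
}
```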

No, but it’s close enough unless you’re dealing with values very close to black. And it may in fact be correct for some textures; sRGB was a minor adjustment to existing practice, so some “sRGB” textures may in fact just be intensity^(1/2.2) rather than actual sRGB. The main reason for sRGB’s adjustment is that an exponent of less than one results in the graph having a vertical slope at zero, meaning that an encode-decode cycle loses accuracy for values very close to zero.

Using the sRGB format for the QOpenGLTexture changes nothing.

maybe you forgot to call: