OpenGL and color management

I’m a little confused as to how OpenGL interacts with the color management system in the OS.

Are fragments written to the framebuffer transformed by the active monitor profile before scanout, or does OpenGL just ignore this?

Assuming OpenGL takes color management into account, what is the color space of the framebuffer? I know there is an sRGB framebuffer extension, so if that is used, it is obviously sRGB. But if that extension isn’t used, what is the color space then?


– Anders

I am not aware of OpenGL taking such things into account. What it does is render an image in a certain format to a window; usually this format is RGBA. So if you write an RGB(1, 1, 0) pixel to a window, it will be RGB(1, 1, 0), from the GL point of view. If the window system applies some sort of gamma correction etc., that correction is applied afterwards to the window image. To make it short: if you query an RGB render target, it is a linear RGB space.

I am not talking about gamma correction, but color spaces. RGB is not a well-defined color space the way sRGB, CIE XYZ or Adobe RGB are.

But if OpenGL is “color space agnostic”, I find it strange that EXT_framebuffer_sRGB exists, since it only takes the gamma part of the sRGB spec into account and not the gamut part… perhaps it should be called EXT_framebuffer_gamma22 instead?
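To make the “gamma part” concrete: the encoding an sRGB framebuffer applies on write is the piecewise curve from the sRGB spec, which is close to, but not exactly, a pure 2.2 power law. A quick comparison in plain Python (constants are straight from the sRGB spec; this says nothing about the primaries, i.e. the gamut half of the definition):

```python
# The piecewise sRGB transfer function (linear segment near black, 2.4
# exponent with offset above) compared against a pure 2.2 power law.
# Constants are from the sRGB specification.

def srgb_encode(linear):
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def gamma22_encode(linear):
    return linear ** (1 / 2.2)

for x in (0.0005, 0.01, 0.18, 1.0):
    print(x, srgb_encode(x), gamma22_encode(x))
# The two curves agree closely in the midtones but diverge near black,
# where sRGB is linear and the power law shoots up.
```

So even “EXT_framebuffer_gamma22” would be slightly off as a name; the extension encodes the sRGB curve, but the gamut complaint stands.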

Just curious: what sort of application do you have that would benefit from color-managed OpenGL?

If you were able to specify the color space of the GL framebuffer, how do you envision this interacting with the rest of a color-managed system? Potentially one that has multiple displays, each with a different profile, running multiple windowed applications, each with a different color space.

If your specified framebuffer color space is different from the active monitor(s) profile, some conversion will likely need to be done on swap. Are you willing to accept the performance impact of this? What about in a fullscreen application?

Any application that cares about accurate color would benefit.

That is: if nothing is known about the color space of OpenGL, how can you display an image (say a JPEG or TIFF) with some well-defined color space correctly (assuming you have a calibrated monitor with a color profile)?

You wouldn’t know which color space to convert the image’s pixels into.

If we assume that OpenGL doesn’t manage colors at all, then you can look up the monitor profile on the machine and do the conversion in a fragment shader: render everything to an FBO, and in the final copy to the screen do the math per pixel.
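A sketch of the math that final per-pixel pass would do, written in Python rather than GLSL just so the numbers are checkable. The matrices here are the published sRGB ↔ XYZ (D65) ones, used as stand-ins for whatever matrix and tone curves a real monitor profile would supply:

```python
# Per-pixel conversion a final-pass fragment shader could perform:
# decode the source transfer curve, change primaries through XYZ,
# then re-encode with the monitor's curve. Matrices are the published
# sRGB <-> XYZ (D65) pair, standing in for a real monitor profile.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def srgb_decode(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_encode(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def to_monitor(rgb):
    linear = [srgb_decode(c) for c in rgb]    # undo source transfer curve
    xyz = mat_mul(SRGB_TO_XYZ, linear)        # source primaries -> XYZ
    mon = mat_mul(XYZ_TO_SRGB, xyz)           # XYZ -> monitor primaries
    return [srgb_encode(min(max(c, 0.0), 1.0)) for c in mon]  # clamp, re-encode

# With identical source and monitor spaces the round trip is near-identity:
print(to_monitor([0.25, 0.5, 1.0]))
```

In the shader version, the two matrices would collapse into one precomputed source-to-monitor matrix, and the curves would be `pow` calls or LUT textures.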

One problem could be Vista, where we more or less always render to an FBO and let the window system copy that to the screen. Does Vista color-manage that copy? It could be possible, since Vista has a more complete color manager in the core.

Color management will happen when the image is displayed on the screen, in Vista and in XP too. Basically RGB is treated as it should be: 0 is nothing on the color channel and 1 is maximal intensity. The interpretation of these values is left to the hardware (graphics device/monitor). As I understand sRGB, it is just an RGB variant that should ensure the best precision with respect to existing monitors. But if I am not mistaken, the operating system should take care of the best RGB-to-monitor mapping, as it has all the information about the monitor’s properties. How is it done in professional software like Photoshop?

In Photoshop you can set up a “working color profile”, which defaults to the color profile of the image you load, or sRGB if no color profile is specified.

On Windows you can call SetICMProfile(HDC, profile) to set the color profile for a particular device context. I’ve tried doing that for the device context of the window in my OpenGL app. It doesn’t produce any errors, i.e. the call succeeds, but whether it affects only GDI and not OpenGL, I don’t know.

How do you know it doesn’t affect OpenGL? Did you run some tests? Is there a visual difference when you set different profiles?

I don’t know if it affects OpenGL, since I haven’t done any tests yet :slight_smile:

Btw, SetICMProfile appears to be for setting output profiles, which are for hardcopy devices?
SetColorSpace, OTOH, is for input devices, so it might be better suited.

I’ll try a simple test with an image in adobe rgb and compare it to photoshop…

No luck…

Here’s what I did: I loaded a photo in Adobe RGB into Photoshop, duplicated it, and assigned the sRGB profile to the copy. The copy’s colors look duller, since it now has the wrong profile.

In my OpenGL app, I set up the device context to use AdobeRGB1998.icc, load the image, and display it with glDrawPixels. The result looks exactly like the dull sRGB copy in Photoshop…
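The dull look has a simple numerical explanation, which can be checked with the published Adobe RGB (1998) and sRGB matrices. Adobe RGB’s pure green, converted through XYZ, lands outside the sRGB gamut, so treating Adobe RGB pixel values as if they were sRGB necessarily desaturates them:

```python
# Adobe RGB (1998) green converted to linear sRGB via XYZ. Matrices are
# the published D65 ones. Negative components mean the color lies outside
# the sRGB gamut, which is why Adobe RGB pixels shown "as sRGB" look dull.

ADOBE_TO_XYZ = [
    [0.5767, 0.1856, 0.1882],
    [0.2973, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
]
XYZ_TO_SRGB = [
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
]

def mat_mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

green_xyz = mat_mul(ADOBE_TO_XYZ, [0.0, 1.0, 0.0])
green_srgb = mat_mul(XYZ_TO_SRGB, green_xyz)
print(green_srgb)  # R and B come out negative: outside the sRGB gamut
```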

So I guess OpenGL ignores color management…

I used to think that gamma correction takes place when you swap the back buffer to the front buffer, but perhaps I’m wrong about that.

I have done what you are looking for, but the current state of the code is pretty kludgey. I did this for some OpenGL-based Photoshop plugins that I wrote for myself.

I started with the ICM source code from ( You could also use LittleCMS or some other ICM library. ) These libraries take an input and an output profile and then create the color transformations needed. In the source I inserted callbacks to provide my plugin with the transformations (usually a LUT/matrix pair, but not necessarily). A function that can be called from a fragment program is then generated from these transformations. The matrix transformations are straightforward, and LUTs are implemented as textures. (LUTs have to be implemented as textures rather than static arrays in the fragment program due to indexing limitations, at least on my hardware. A real pain; see below.)

Does it work? For the most part. There are some numerical issues that can show up as visual artifacts; gamma ramps were the biggest offender. Since LUTs can’t be float arrays within the fragment program and FP textures aren’t necessarily supported, I use 16 bit/channel textures to implement LUTs. It is this lack of precision that can result in artifacts, usually banding, especially in the shadow areas. So, for the case of gamma ramps, the ‘pow’ function is used directly in the fragment program rather than a LUT/texture.
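The shadow-banding effect is easy to reproduce outside a shader. Here is an illustration with hypothetical sizes (a 1024-entry LUT, 16 bits per channel, nearest-neighbour sampling) of how a quantized LUT collapses distinct shadow values onto a handful of levels while a direct `pow` does not:

```python
# A gamma-2.2 decode implemented (a) directly with pow and (b) through a
# 1024-entry LUT whose entries are quantized to 16 bits per channel, the
# way a 16 bit/channel texture would store them, sampled nearest-neighbour.

LUT_SIZE = 1024

def q16(x):
    # Quantize like a 16 bit/channel texture.
    return round(x * 65535) / 65535

lut = [q16((i / (LUT_SIZE - 1)) ** 2.2) for i in range(LUT_SIZE)]

def decode_lut(x):
    return lut[round(x * (LUT_SIZE - 1))]

def decode_pow(x):
    return x ** 2.2

# 500 distinct shadow inputs in (0, 0.05]: pow keeps them all distinct,
# while the quantized LUT collapses them onto far fewer output levels,
# which is exactly the banding seen in the shadow areas.
shadow = [i / 10000 for i in range(1, 501)]
distinct_pow = len({decode_pow(x) for x in shadow})
distinct_lut = len({decode_lut(x) for x in shadow})
print(distinct_pow, distinct_lut)
```

With float textures or in-shader `pow` the levels stay distinct, which matches the workaround described above.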

I’ve tried various input profiles retrieved from Photoshop. Output profiles have been restricted to sRGB and the profiles created by a Spyder. All have given me results that I find “acceptable”.

If the ICM profile uses parametric curves - well that isn’t implemented…

I have no problem giving out my source; it is mostly a time/process issue for me. If you’re doing this for a commercial application, you’re probably better off doing it yourself and making something a bit more, ah, robust. We could take it offline and I can give you some more detailed pointers.