Colored alpha or blending shaders

One of the most useful blending functions is
(GL_ONE, GL_ONE_MINUS_SRC_ALPHA). It lets us do all the blending “effects” we need (color filter, additive, …) in one pass. Unfortunately, one thing is missing for this to be really true: colored alpha. If alpha (actually opacity) were RGB, it would define how the fragment filters the color seen through it. This is actually the only blending function available in RenderMan, since no other is needed. To get the same behavior as the common (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), you just multiply the color by the opacity at the end of your fragment shader. All we need is for a shader to output two vec3 (gl_FragColor and gl_FragOpacity) and everything should be fine.
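To illustrate the "one blend setup, many effects" claim, here is a CPU-side sketch (Python rather than GLSL; all values and names are illustrative): with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA), the fragment shader's outputs alone select the effect.

```python
# With (GL_ONE, GL_ONE_MINUS_SRC_ALPHA), one blend configuration covers
# several "effects", chosen purely by what the fragment shader outputs.

def blend_one_minus(src_c, src_a, dst_c):
    # result = SC * 1 + DC * (1 - Sa), per channel
    return tuple(s + d * (1 - src_a) for s, d in zip(src_c, dst_c))

dst = (0.25, 0.5, 0.75)

# Additive: output alpha 0, so the destination is kept and the color added.
assert blend_one_minus((0.125, 0.125, 0.125), 0.0, dst) == (0.375, 0.625, 0.875)

# Classic transparency: premultiply the color by alpha in the shader, then
# this blend matches (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).
src, a = (1.0, 0.0, 0.0), 0.5
pre = tuple(c * a for c in src)
assert blend_one_minus(pre, a, dst) == tuple(
    s * a + d * (1 - a) for s, d in zip(src, dst))
```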

Another way to do it is with the ATI_draw_buffers extension. With this extension, a fragment shader can output more than just one vec4 (through gl_FragData[]). If blending shaders existed, we could combine two vec3 outputs from two draw buffers to get the same behavior.

One or the other way would be great.

To have the same behavior as the common (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), you just need to multiply the color by the opacity at the end of your fragment shader
That’s called premultiplied alpha.
I think there is something wrong with it, because it doesn’t quite replace (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).

What is alpha, then? You need it for the destination blending.

Use gl_FragColor.a

What I mean by alpha is opacity. And instead of having just one value (currently called alpha), opacity should have 3 components: one for scaling red, one for green and one for blue. Opacity represents how the surface filters the color seen through it.

For now, when using (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), if the source color is (SCr, SCg, SCb), the source alpha is Sa, the destination color is (DCr, DCg, DCb) and the destination alpha is Da, we have:

final color = (SCr * Sa + DCr * (1 - Sa), SCg * Sa + DCg * (1 - Sa), SCb * Sa + DCb * (1 - Sa))
final alpha = Sa * Sa + Da * (1 - Sa)
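As a sanity check, here is a small CPU-side simulation of these two formulas (Python rather than GLSL; values are illustrative):

```python
# Plain simulation of (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) as written
# above, including the alpha channel.

def blend(src_c, src_a, dst_c, dst_a):
    # final color = SC * Sa + DC * (1 - Sa), per channel
    color = tuple(sc * src_a + dc * (1 - src_a) for sc, dc in zip(src_c, dst_c))
    # final alpha = Sa * Sa + Da * (1 - Sa)
    alpha = src_a * src_a + dst_a * (1 - src_a)
    return color, alpha

# Half-transparent red over opaque blue:
print(blend((1.0, 0.0, 0.0), 0.5, (0.0, 0.0, 1.0), 1.0))
# → ((0.5, 0.0, 0.5), 0.75)
```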

Now, with an RGB source opacity (SOr, SOg, SOb) and the blending equation (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) (which could be called (GL_ONE, GL_ONE_MINUS_SRC_OPACITY)), we would have:

final color = (SCr + DCr * (1 - SOr), SCg + DCg * (1 - SOg), SCb + DCb * (1 - SOb))

and, if opacity information is needed in the final framebuffer image (with (DOr, DOg, DOb) being the destination opacity):
final opacity = (SOr + DOr * (1 - SOr), SOg + DOg * (1 - SOg), SOb + DOb * (1 - SOb))
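A CPU-side sketch of these two per-channel equations (Python rather than GLSL; names and values are illustrative):

```python
# Simulating the proposed (GL_ONE, GL_ONE_MINUS_SRC_OPACITY) blend, where
# the source opacity is an RGB triple instead of a single alpha value.

def blend_colored(src_c, src_o, dst_c):
    # final color = SC + DC * (1 - SO), per channel
    return tuple(s + d * (1 - o) for s, o, d in zip(src_c, src_o, dst_c))

def blend_opacity(src_o, dst_o):
    # final opacity = SO + DO * (1 - SO), per channel
    return tuple(s + d * (1 - s) for s, d in zip(src_o, dst_o))

# A pure green filter: red and blue fully blocked, green fully transmitted.
color = blend_colored((0.0, 0.0, 0.0), (1.0, 0.0, 1.0), (0.25, 0.5, 0.75))
# → (0.0, 0.5, 0.0): only the green of the destination shows through
```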

It is easy to see that if the same behavior as the first computation is needed, (SCr, SCg, SCb) should just be replaced by (SCr * Sa, SCg * Sa, SCb * Sa) (which can be computed in the fragment shader), (SOr, SOg, SOb) by (Sa, Sa, Sa) and (DOr, DOg, DOb) by (Da, Da, Da).
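The substitution can be checked numerically with a small Python sketch (not GLSL; values are illustrative):

```python
# Premultiply the source color by Sa and use (Sa, Sa, Sa) as the RGB
# opacity; the colored-opacity blend then reproduces the classic
# (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) result exactly.

def classic(src_c, sa, dst_c):
    return tuple(s * sa + d * (1 - sa) for s, d in zip(src_c, dst_c))

def colored(src_c, src_o, dst_c):
    return tuple(s + d * (1 - o) for s, o, d in zip(src_c, src_o, dst_c))

sc, sa, dc = (0.8, 0.4, 0.2), 0.5, (0.1, 0.3, 0.9)
premultiplied = tuple(c * sa for c in sc)
assert colored(premultiplied, (sa, sa, sa), dc) == classic(sc, sa, dc)
```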

If a monochrome alpha is needed in the destination image (e.g. for generating a PNG with an alpha channel), it can easily be computed like this (with (Or, Og, Ob) being the final opacity computed above):
final alpha = 0.3 * Or + 0.59 * Og + 0.11 * Ob
which is the formula to compute luminance from RGB.
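In code, collapsing the RGB opacity to a single alpha channel with those weights looks like this (a Python sketch; the weights are the ones quoted above):

```python
# Collapse an RGB opacity triple to one alpha value via luminance weights.
def opacity_to_alpha(o):
    r, g, b = o
    return 0.3 * r + 0.59 * g + 0.11 * b

# A fully opaque surface stays fully opaque (the weights sum to 1).
print(opacity_to_alpha((1.0, 1.0, 1.0)))
```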

Actually, what we currently call alpha is just the particular case where Or = Og = Ob. Using (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) makes the source color independent of the opacity (though it can be taken into account if we want), and having an RGB opacity allows each color component to be scaled differently. It leaves us free to do whatever we want, while keeping one easy general case.

Some examples:

Green glass with an HDR envmap for reflection:

vec3 envmapColor; // color read in the HDR envmap
vec3 reflectionColor; // RGB reflection ratio of our material (just like specular color)
gl_FragColor = envmapColor * reflectionColor;
gl_FragOpacity = vec3(1, 0, 1); // green filter

Classical non-premultiplied RGBA texture:

vec3 materialColor; // color of material with lighting computed
vec4 textureColor;
gl_FragColor = materialColor * textureColor.rgb * textureColor.a;
gl_FragOpacity = vec3(textureColor.a);

It’s nice to see it in math form. You are saying you need RGB for the color plus an extra 3 components for the per-channel opacity.

Like you said, 2 things are needed:

  • gl_FragData (vec4)
  • gl_FragOpacity or whatever you want to call it (vec4). gl_FragData[1] is good enough

The fragment stage sends these 2 to the blending stage. If programmable blending is available, you write your shader that way.
The output would typically be vec4 per color buffer.

The blending stage would need access to 2 “Draw Buffers” to do the job.

In essence, you are asking for a fully capable blending shader.

I’m proposing 2 ways: either blending shaders, to be able to combine any fragment shader outputs in any way, or just the current configurable blending, but with opacity colored and separated from color. I think the blending shader way would be the most modular and general-purpose, but it might require hardware changes.

Actually, the 2 output values should be vec3, not vec4, since both are just RGB colors and thus the 4th component is useless for us. Of course, we could just ignore the 4th component.