Blending in the fragment shader and efficiency

The standard way to render translucent geometry is by using the fixed-function pipeline through GL_BLEND. Another way is to render into a texture (using FBOs), read that texture from inside the shader, and perform a custom blending calculation. For example:

First we render all the non-translucent geometry into a texture (our framebuffer-attachable image, aka color_buffer_fai). Now it's time to render a piece of glass. We bind our color_buffer_fai and pass it as a uniform into the glass's shader. We render into the same FBO. The glass fragment shader may look like this:


uniform sampler2D color_buffer_fai; // what has been rendered so far
uniform float screen_w, screen_h;   // viewport size in pixels

void main()
{
    vec2 tex_coords = gl_FragCoord.xy / vec2( screen_w, screen_h ); // window position -> [0,1]
    vec4 source_color = texture2D( color_buffer_fai, tex_coords );
    gl_FragColor = source_color / 2.0; // darken what is behind the glass; written back into color_buffer_fai
}
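
On the CPU side, the setup might look roughly like this — a sketch, assuming color_buffer_fai also names the GLuint texture id attached to the FBO, glass_prog is the glass shader program, and draw_glass() is a placeholder draw call:

glUseProgram( glass_prog );
glActiveTexture( GL_TEXTURE0 );
glBindTexture( GL_TEXTURE_2D, color_buffer_fai ); // the texture we are also rendering into
glUniform1i( glGetUniformLocation( glass_prog, "color_buffer_fai" ), 0 );
draw_glass(); // still rendering into the same FBO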

My question is whether this way of performing blending is efficient. It definitely gives you more freedom in how you blend, but it complicates things, and its efficiency is uncertain. What do you think?

Obviously GL_BLEND, being implemented natively in hardware, is faster, simpler, and doesn't require an additional buffer.
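
For instance, the halving done in the glass shader above maps to a single fixed-function blend state, result = src*0 + dst*0.5, using the constant blend color (a minimal sketch, standard GL calls):

glEnable( GL_BLEND );
glBlendColor( 0.5f, 0.5f, 0.5f, 0.5f );
glBlendFunc( GL_ZERO, GL_CONSTANT_COLOR ); // result = dst * 0.5, same output as the shader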

GL_BLEND is not deprecated fixed-function; it is still part of the OpenGL 3.2 core profile. So trying to blend without it is basically the same as trying to simulate mipmapping & trilinear filtering through user-side textures & shaders.

The answer is: use GL_BLEND whenever it provides the functionality you need. Otherwise, you have no choice other than to simulate it in a shader.

Edit: Worth noting that writing to the same texture you are currently reading is undefined behavior, so don't expect it to work even if it appears to work :slight_smile:
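
One common way around that hazard is to ping-pong between two color attachments: read from the texture written in the previous pass, write into the other one, then swap. A rough sketch, assuming tex[0] and tex[1] are GLuint textures attached to the bound FBO at GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1, and draw_glass() is a placeholder:

static int read_idx = 0;
glBindTexture( GL_TEXTURE_2D, tex[read_idx] );         // read what was rendered so far
glDrawBuffer( GL_COLOR_ATTACHMENT0 + (1 - read_idx) ); // write into the other attachment
draw_glass();
read_idx = 1 - read_idx;                               // swap for the next translucent pass

Since only the pixels the glass covers get written, in practice you also copy the untouched pixels across first (e.g. with glBlitFramebuffer) so the destination attachment holds the full image.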