The standard way to render translucent geometry is through the fixed-function pipeline, using GL_BLEND. Another way is to render into a texture (using FBOs), read that texture from inside the shader, and perform a custom blending calculation there. For example:
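For comparison, the fixed-function route is just a couple of state calls; the hardware then blends each fragment against what is already in the framebuffer (a sketch of the usual "over" setup):

```c
/* Standard alpha blending via the fixed-function pipeline. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
/* resulting color = src.rgb * src.a + dst.rgb * (1 - src.a) */
```

The blend factors are limited to the fixed set glBlendFunc accepts, which is exactly the limitation the texture read-back approach tries to escape.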
First we render all the non-translucent geometry into a texture (our framebuffer-attachable image, a.k.a. color_buffer_fai). Now it's time to render a piece of glass: we bind color_buffer_fai, pass it as a uniform sampler into the glass's shader, and keep rendering into the same FBO. The glass fragment shader may look like this:
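The host-side setup for that step might look roughly like this (hypothetical handle names fbo, color_buffer_fai and glass_program; assumes the opaque pass has already been rendered into color_buffer_fai through this FBO):

```c
/* Keep rendering into the same FBO whose color attachment we will sample. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glUseProgram(glass_program);

/* Expose the opaque pass's result to the glass shader on texture unit 0. */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, color_buffer_fai);
glUniform1i(glGetUniformLocation(glass_program, "color_buffer_fai"), 0);

/* ... draw the glass mesh ... */
```

One caveat worth noting: sampling a texture while it is attached to the currently bound framebuffer is a feedback loop, which OpenGL leaves undefined; in practice people copy the attachment first or ping-pong between two color buffers.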
uniform sampler2D color_buffer_fai;
uniform float screen_w;
uniform float screen_h;
...
vec2 tex_coords = gl_FragCoord.xy / vec2(screen_w, screen_h);
vec4 source_color = texture2D(color_buffer_fai, tex_coords);
gl_FragColor = source_color / 2.0; // darken what's behind the glass; written back to color_buffer_fai
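To illustrate the extra freedom this gives: here is a sketch of a blend the fixed-function pipeline cannot express, because the mix weight depends on the destination color itself (glass_color is a hypothetical uniform for the tint):

```glsl
uniform sampler2D color_buffer_fai;
uniform float screen_w;
uniform float screen_h;
uniform vec3 glass_color;

void main()
{
    vec2 tex_coords = gl_FragCoord.xy / vec2(screen_w, screen_h);
    vec3 dst = texture2D(color_buffer_fai, tex_coords).rgb;

    // Darker backgrounds pick up more of the glass tint: a
    // destination-dependent weight, impossible with glBlendFunc alone.
    float luminance = dot(dst, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(mix(glass_color, dst, luminance), 1.0);
}
```

Anything computable from the destination sample works here, which is the whole appeal of the technique.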
My question is whether this way of performing blending is efficient or not. It certainly gives you more freedom over the blending equation, but it complicates the pipeline, and its efficiency is uncertain. What do you think?