A blend shader would work much like a fragment shader, running immediately after it in the pipeline.
The blend shader’s input is the fragment shader’s output as well as the frame buffer’s pixel value. The blend shader’s output is the value that will be written into the frame buffer.
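That contract can be sketched in C. This is a hypothetical interface, not a real API: the `vec4` struct and the `blend_main` name are illustrative, and the example body implements ordinary "source over" alpha blending spelled out as plain arithmetic.

```c
#include <assert.h>

/* Hypothetical blend-stage contract, sketched in C: the blend shader
 * receives the fragment shader's output (src) and the frame buffer's
 * current pixel value (dst), and returns the value to be written back.
 * The vec4 struct and function name are illustrative, not a real API. */
typedef struct { float r, g, b, a; } vec4;

/* Example body: classic "source over" alpha blending, written out
 * explicitly instead of being selected via glBlendFunc constants. */
static vec4 blend_main(vec4 src, vec4 dst) {
    vec4 out;
    out.r = src.r * src.a + dst.r * (1.0f - src.a);
    out.g = src.g * src.a + dst.g * (1.0f - src.a);
    out.b = src.b * src.a + dst.b * (1.0f - src.a);
    out.a = src.a + dst.a * (1.0f - src.a);
    return out;
}
```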
The full, huge set of GLSL built-in functions would not even be necessary to make this useful. Merely providing min/max, abs, and the basic arithmetic operators (+, −, *, /) would already be a big win.
Current hardware can obviously read from the frame buffer, perform several mathematical transformations, and write back to it. Otherwise, glBlendEquation[Separate] could not work.
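What the fixed-function stage computes can be emulated in a few lines of C, which makes the point concrete: the hardware already reads the destination pixel, applies a small formula, and writes the result back. This sketch covers only the GL_FUNC_ADD equation with a handful of common factors; the enum names mirror the GL constants, but the code is illustrative, not driver code.

```c
#include <assert.h>

/* A subset of the fixed-function blend factors (names mirror the GL
 * constants; this is an emulation for illustration, not driver code). */
typedef enum { FACTOR_ONE, FACTOR_ZERO, FACTOR_SRC_ALPHA,
               FACTOR_ONE_MINUS_SRC_ALPHA } blend_factor;

static float factor_value(blend_factor f, float src_a) {
    switch (f) {
        case FACTOR_ONE:                 return 1.0f;
        case FACTOR_ZERO:                return 0.0f;
        case FACTOR_SRC_ALPHA:           return src_a;
        case FACTOR_ONE_MINUS_SRC_ALPHA: return 1.0f - src_a;
    }
    return 0.0f;
}

/* The GL_FUNC_ADD blend equation, per channel:
 * result = src * sfactor + dst * dfactor. Everything the fixed-function
 * stage can do is a variation on this factor-times-term structure. */
static float blend_add(float src, float dst, float src_a,
                       blend_factor sf, blend_factor df) {
    return src * factor_value(sf, src_a) + dst * factor_value(df, src_a);
}
```

The limitation is visible in the structure: you can only pick factors from a fixed menu and combine the two terms with a fixed operator, which is exactly what a blend shader would generalize.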
Blend shaders would let you replace function calls full of cryptic constants and limited functionality with one line of easily readable shader code (or more lines, if you will) that does exactly what you need.
You could, among other things:
- Use LogLuv encoding (or any other non-RGB colour representation) and do correct blending.
- Determine an object’s thickness (for subsurface scattering etc) in one pass.
- Run a Verlet integrator without texture ping-pong.
- Run your own accumulation buffer if you need one, and never worry about hardware acceleration.
- Do shadow mapping with several semi-transparent occluders.