I’d never really had an urge to use the accumulation buffer (yeah, motion blur and depth of field are neat, but having to render the whole scene 2 or more times and halve your frame rate never seemed worth it). The one time that I thought the accumulation buffer would be useful, I found out that it’s way too limited anyway.
Imagine that you are drawing surfaces with multipass effects (or perhaps you use multipass as a fallback when the HW doesn’t do multitexture). Further, suppose you have been told that the LOD algo will simply fade models out once they reach a certain distance - or that, as part of a death animation, models must fade out to avoid having tons of corpses around. How, then, do you apply the multipass shader to this transparent object? Multipass depends on the framebuffer containing only the intermediate results of the shader you are trying to render. Blindly doing the multipass, modulating each pass by the opacity, gives the wrong results. What you really want is to render the surface with the full shader, and then composite the result into the framebuffer using the alpha of the surface. I looked carefully at the facilities of the accumulation buffer and found that it is not up to the task.
What I need, then, is a compositing buffer which holds a single RGB or RGBA image at the same resolution and bit depth as the framebuffer (which would have to be RGBA). At any time I would like to be able to blit the framebuffer into the compositing buffer (using any of the standard blend modes - with the alpha channel of the framebuffer as src alpha). One could then render all the opaque objects in the scene and directly copy the results to the compositing buffer. Next one could clear the framebuffer and render a complex transparent surface (say a large pool of reflective water, which requires many rendering steps since it acts as a partially reflective portal-mirror), clipping it with the Z-buffer. Once the framebuffer was set up so the pool was properly shaded, and its destination alpha was appropriate for compositing, this layer could be alpha-blended into the compositing buffer.
The memory demands of such a technique seem manageable, but it would demand atrocious amounts of fillrate unless the copy and blend operations were cleverly optimized to avoid copying empty regions…
I realize that the same effects could be achieved through intelligent use of glCopyTexSubImage, but it seems the direct route would be easier to optimize…
For now I can live with the errors caused by blindly using multipass - they aren’t too bad with many multipass effects, and usually aren’t noticeable when objects fade in and out quickly.
[This message has been edited by timfoleysama (edited 11-28-2000).]