I was wondering what would be the best way to implement a decent-quality, real-time depth-of-field effect in OpenGL without requiring pixel shader support (ARB_fragment_program).
At first I thought about using the accumulation buffer: render the scene in multiple passes from a number of jittered camera eye positions that all share a common target position (the point of focus), then average the passes together. However, the accumulation buffer seems too slow for this task.
I am now thinking about using multiple render-to-texture passes (pbuffers) and blending them together, to see whether that achieves decent quality at real-time frame rates, but there may well be another approach I'm not aware of.
Does anyone know of a way to achieve a decent quality, real-time depth-of-field effect in OpenGL without using pixel shaders? Any help would be greatly appreciated. Thanks in advance.