Something dawned on me the other day.
I’ve always created multiple FBOs when I want multiple render targets, even if I’m not using them all at once. Then I realized that you can just modify a single FBO by changing its color attachments. This is much easier to code and seems to perform well, even on mobile.
So my question is: is there any reason not to do this? Are there any performance penalties associated with swapping color attachments as opposed to using multiple framebuffers? If so, what is the point of having multiple framebuffers?
Maybe it is easier for your specific use case. But in general, a single glBindFramebuffer() call is simpler than multiple glFramebufferTexture*D() and/or glFramebufferRenderbuffer() calls.
In terms of performance, I would expect swapping between FBOs to “naturally” be quicker than using a single FBO and continually rebinding its attachments, although I daresay it’s possible to reduce the overhead to the point where it’s negligible compared to rendering.
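To make the comparison concrete, here is a minimal sketch contrasting the two strategies. It assumes a desktop GL 3.x context, and the FBO and texture handles (`fboA`, `fboB`, `fbo`, `texA`, `texB`) are hypothetical objects created elsewhere:

```c
/* Strategy 1: one FBO per render target -- switching targets is a
 * single bind, and each FBO's attachment state is validated once. */
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
/* ... draw pass A ... */
glBindFramebuffer(GL_FRAMEBUFFER, fboB);
/* ... draw pass B ... */

/* Strategy 2: a single FBO, re-pointing its color attachment each pass. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texA, 0);
/* ... draw pass A ... */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texB, 0);
/* ... draw pass B; the driver may re-validate the FBO after the change ... */
```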
Ah I see.
So it’s similar to binding individual VBOs vs a single VAO
Theoretically, changing the FBO attachments would cost more than switching between FBOs, because the driver has to re-validate the changed FBO.
That said, switching between FBOs definitely comes with a cost! It’s slower than, for example, using a texture array and just switching layers with an integer uniform and a simple geometry shader.
Also, you could attach several color attachments to one FBO and just switch the active slots. Modern GPUs support at least 8 color attachments, but depth and stencil are limited to one attachment each, because this mechanism is designed for multi-target rendering.
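A sketch of that multi-attachment approach, assuming a context that supports multiple color attachments (so not ES 2.0) and hypothetical pre-created textures `tex0`/`tex1` — the active slot is selected with glDrawBuffers(), with no re-attachment between passes:

```c
/* Attach two color targets to the same FBO up front. */
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex0, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, tex1, 0);

/* Render to attachment 0 only ... */
GLenum buf0[] = { GL_COLOR_ATTACHMENT0 };
glDrawBuffers(1, buf0);
/* ... draw pass A ... */

/* ... then switch the active slot to attachment 1. */
GLenum buf1[] = { GL_COLOR_ATTACHMENT1 };
glDrawBuffers(1, buf1);
/* ... draw pass B ... */
```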
Very good points. However, on ES 2.0, which I’m targeting, only one color attachment is allowed at a time.
The original technique I had used a glCopy* call to copy the rendered framebuffer into the refraction texture, which performed disastrously on iPad due to a forced pipeline flush and stall.
So instead I render the refraction to an FBO, then swap color attachment 0 to my “final” color texture and render the refraction again there as a backdrop, then render the mirror/glass on top of that. Finally I bind the default screen framebuffer, which is a separate framebuffer entirely, and copy the whole final image to it using a standard textured-quad approach.
This all performs decently on the latest iOS hardware. I had to jump through most of those hoops because OpenGL ES 2.0 doesn’t let you write to gl_FragDepth, so I had to keep the same depth buffer around through most of the rendering steps.
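A rough ES 2.0 sketch of the pass ordering described above — not the poster’s actual code; all names (`fbo`, `refractionTex`, `finalTex`, `depthRb`) are hypothetical. The key point is that only color attachment 0 is swapped, while the depth renderbuffer stays attached across passes:

```c
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

/* Pass 1: render refraction into its own texture, with the shared
 * depth renderbuffer attached. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, refractionTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);
/* ... render refraction ... */

/* Pass 2: swap color attachment 0 to the final texture; depth stays put,
 * so the depth information from pass 1 is preserved. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, finalTex, 0);
/* ... render the refraction backdrop, then the mirror/glass on top ... */

/* Final pass: back to the default framebuffer (object 0), then draw
 * finalTex to the screen as a fullscreen textured quad. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);
/* ... draw fullscreen quad sampling finalTex ... */
```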