Thanks for your reply! First of all, to give brief context for my question: I am developing a multi-view light-field renderer that generates hundreds of views within a view volume. Hence we are trying to make some basic optimizations so the renderer doesn't brute-force render all objects, and doesn't push all of them through the GPU pipeline every frame unless a view/transform change happens.
[QUOTE=Dark Photon;1289104]Is your background object expensive to render? Have you profiled how much time it takes it to render? If it’s not that expensive, perhaps you don’t need to do something special for it. That’s the best case. I would first make very sure that there is something real here to gain, rather than just add needless code complexity to your application.
On the other hand, if (let’s just say) the background is very expensive to render for some reason, you could consider the pre-render-background-once and then use it to seed your framebuffer each frame before rendering the foreground. [/QUOTE]
I haven’t profiled it in complete detail yet, but I can see the frame rate drop when using similar models of varying geometric complexity. Hence I believe there is at least something to gain, as we are rendering hundreds of views just to generate one light-field display update.
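To make the "pre-render the background once" idea concrete, here is a minimal sketch of what I have in mind, assuming hypothetical `width`/`height` dimensions and `drawBackground()`/`drawForeground()` helpers (none of these names are from the thread):

```cpp
// One-time setup: an FBO with a color texture to cache the background.
GLuint fbo, colorTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

// One-time background pass into the FBO.
glClear(GL_COLOR_BUFFER_BIT);
drawBackground();

// Per frame: seed the default framebuffer from the cached background,
// then render only the foreground on top.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
drawForeground();
```

This is only worthwhile if the one-time background pass is genuinely the expensive part; the per-frame blit itself isn't free across hundreds of views.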
[QUOTE=Dark Photon;1289104]A question: Can the foreground objects interpenetrate or render behind the background object? If so, then you may need to save off not only the color buffer for your background object but also its depth buffer too. Depending on what you need here, there are a number of imposter techniques which make use of depth that you could consider. [/QUOTE]
Yes, they may or may not. So I want to have that as an option: discard depth or keep it. But that is a refinement at this point, so I might just discard depth if I get significant mileage doing so.
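For the keep-depth option, the sketch below shows what I'd add, assuming an existing FBO `fbo` that already holds the cached background color (the names and setup are hypothetical, not from the thread):

```cpp
// Attach a depth texture to the background FBO so the background's
// depth is cached along with its color.
GLuint depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, nullptr);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);

// Per frame: blit color AND depth, so foreground fragments depth-test
// correctly against the stored background (interpenetration works).
// Note: a depth blit requires matching depth formats between the FBO
// and the destination, and the filter must be GL_NEAREST.
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT, GL_NEAREST);
```

The depth-format-matching constraint is the main caveat I'd have to verify against my default framebuffer's configuration.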
[QUOTE=Dark Photon;1289104]Another question: Do you need the background/foreground color buffer result to be multisampled (e.g. for AA or multisample alpha reasons)? You want to carefully consider the answer to this question with the last to determine what capabilities you’re going to require from your OpenGL/OpenGL ES implementation. [/QUOTE]
I would like to test the results produced without multisampling first and make a decision based on that. I may not need it significantly, at least to start with.
I found/read an interesting option somewhere for writing the background buffer into the final image: render it onto a static, orthographically projected quad, and then superimpose the foreground objects with the proper perspective and view transformations. I'm just wondering whether that would save or cost me more, compared to a direct copy of the background buffer into the final buffer.
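For comparison, here is roughly what I understand the quad variant to look like, assuming a texture `colorTex` holding the pre-rendered background, a trivial pass-through shader program `quadProg`, and a VAO `quadVAO` with two triangles covering NDC [-1,1]² (all hypothetical names):

```cpp
// Draw the cached background as a screen-aligned quad instead of
// blitting. Depth test/writes are disabled so the quad sits behind
// everything and leaves the depth buffer untouched.
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glUseProgram(quadProg);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, colorTex);
glBindVertexArray(quadVAO);
glDrawArrays(GL_TRIANGLES, 0, 6);
glDepthMask(GL_TRUE);
glEnable(GL_DEPTH_TEST);
// Foreground objects then render with their normal perspective and
// view transforms on top of the quad.
```

My intuition is that both approaches cost about one full-screen write of bandwidth per view; the quad adds a texture sample and a tiny draw call but plays nicer with multisampled or format-mismatched targets, whereas the blit avoids shader work entirely. Measuring both seems like the only reliable answer.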