Hey everyone!
As part of my diploma thesis in computational visualistics, I’m currently working on a library that allows the use of pre-defined and/or runtime-assigned materials for (as of yet) polygonal geometry.
Inspired by the visual editing of shading properties à la Maya or Blender, I’m trying to generalize the idea of combining a multitude of different shaders into a single material which can then be used when rendering. The library supports a number of readily available stock shaders, similar to those in the current SuperBible, and is meant to be extendable with custom shaders.
So, you would define a material with one or more passes, assign each pass a shader, define which resources to bind to the shader, and let the whole thing fly. This has, of course, been done before - OGRE is just one example. My implementation, however, is geared towards editing the properties of a material from within an application (a scene editor, a shader IDE, etc.) and then having the library check which shaders can actually be combined into a material. This involves checking whether a certain material is usable at all and in which orders its shaders can be executed, based on the state left by the previous pass in the same material, state from passes in other materials or other custom render passes, and the resources available in the resource cache at the time of definition.

To give the application a hint on how to treat and interpret materials, shader scripts export a feature set defining the INs/OUTs of every shader stage, e.g. the predefined ModelViewMatrix or a 2D sampler. That’s a fair amount of bookkeeping, but it guarantees a lot of certainty when defining the contents of a scene and the way they’re to be rendered. State sorting in this scheme consists of identifying objects with equal or closely matching materials to minimize state changes. So far so good.
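To make the state-sorting part concrete, here’s a minimal CPU-side sketch (all names like DrawItem and MaterialId are illustrative, not the library’s actual API): sort the render queue by material key so that items sharing a material end up adjacent, and the renderer only rebinds shaders/resources when the key changes.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical key the library assigns to a unique pass/shader/resource combo.
using MaterialId = std::uint32_t;

struct DrawItem {
    MaterialId material;
    int        mesh; // placeholder for the actual geometry handle
};

// Sort the queue so items with equal materials become adjacent.
// stable_sort keeps the submission order within each material group.
inline void sortByMaterial(std::vector<DrawItem>& queue) {
    std::stable_sort(queue.begin(), queue.end(),
                     [](const DrawItem& a, const DrawItem& b) {
                         return a.material < b.material;
                     });
}

// Count the material rebinds a queue would cause when drawn in order.
inline int stateChanges(const std::vector<DrawItem>& queue) {
    int changes = 0;
    for (std::size_t i = 0; i < queue.size(); ++i)
        if (i == 0 || queue[i].material != queue[i - 1].material)
            ++changes;
    return changes;
}
```

With four items alternating between two materials, the unsorted queue costs four rebinds and the sorted one only two.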
Here’s my problem: say we have a bunch of objects with a simple, single-pass material, which could easily be rendered to the default framebuffer or to some FBO. Now add some objects with multi-pass materials, some using 2 passes and some using 3 or more.
My idea is to render each group of objects into a separate buffer and finally combine all render targets into the final image. Assuming that in most scenes the majority of objects share a lot of shading properties or are shaded identically (i.e. with the same material - different textures, relative light positions etc. notwithstanding), that does not seem to be too much overhead. But if I have a lot of differently shaded multi-pass objects which can only be grouped to a certain extent, that leaves me with a lot of FBOs to bind, render to and combine every frame.
Another problem: when I group objects by material, render each group into a separate buffer and then try to combine the buffers, objects from buffer 0 may very well obscure objects from buffer 1, even though depth-wise it should be the other way around.
Is there a way of combining framebuffers while using depth comparison? Does anyone have suggestions for, or experience with, the kind of approach described?
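For what it’s worth, conceptually what I’m after is just a per-pixel minimum over depth. Here’s a CPU model of it (illustrative names only, and assuming the usual convention that smaller depth means nearer); on the GPU this would presumably be a fullscreen pass sampling each layer’s color and depth textures, or sharing a single depth attachment between the FBOs so the test already happens while rendering:

```cpp
#include <limits>
#include <vector>

// One pixel of a rendered layer: packed RGB color plus the depth
// value the pass wrote for it.
struct Pixel {
    unsigned color = 0;
    float    depth = std::numeric_limits<float>::infinity();
};

// Depth-aware composite of two equally sized layers: for every pixel,
// keep the fragment that is closer to the camera (smaller depth).
std::vector<Pixel> compositeByDepth(const std::vector<Pixel>& a,
                                    const std::vector<Pixel>& b) {
    std::vector<Pixel> out(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        out[i] = (a[i].depth <= b[i].depth) ? a[i] : b[i];
    return out;
}
```

The result is then independent of the order the buffers were rendered in, which is exactly the property the plain combine step is missing.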
Thank you all for your time!
All the best
Thomas