I’m currently writing a GLSL-based material system. Basically, the system works by combining a bunch of different blocks of code, each of which conforms to a predefined “class” — e.g. ambient effect, per-light effect, vertex transformation effect, etc. I’ve also implemented my own semantic system on top of GLSL to match inputs and outputs between blocks.
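To make the idea concrete, here’s a rough sketch of what one such block might look like. The signature and names (`applyLight`, the parameter list) are purely illustrative — they’re not from any actual implementation, just an example of a per-light block conforming to a fixed signature:

```glsl
// Hypothetical per-light block. Every block of this "class" would share
// this exact signature so blocks can be swapped without recompiling callers.
vec3 applyLight(vec3 baseColor, vec3 normal, vec3 lightDir, vec3 lightColor)
{
    float ndotl = max(dot(normal, lightDir), 0.0); // simple Lambertian term
    return baseColor + lightColor * ndotl;
}
```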
The plan is that, at runtime, the system will swap different blocks in and out on the fly in order to handle things like different light sources and ambient effects (e.g. fog, or ambient lighting in different rooms, etc.).
Currently each type of block has a fixed function signature, since I want to compile all of them at load time into shader objects. That way I can just link the existing objects together on the fly, which should theoretically be faster than recompiling everything. However, since GLSL has no function pointers or equivalent functionality (as far as I know), I still have to generate and compile a short connector file which maps the function names. This should be quick, though, since it would only be a few lines of simple function calls.
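For reference, the generated connector I have in mind would look something like this (the block names `ambientBlock` and `lightBlock` are hypothetical — the generator would substitute whichever blocks are currently active):

```glsl
// Generated connector source: forward-declare each block's fixed-signature
// entry point (defined in a separately compiled shader object) and chain
// them in the configured order. The linker resolves the names.
vec3 ambientBlock(vec3 color);
vec3 lightBlock(vec3 color);

void main()
{
    vec3 color = vec3(0.0);
    color = ambientBlock(color);
    color = lightBlock(color);
    gl_FragColor = vec4(color, 1.0);
}
```

Swapping a block then only means regenerating and recompiling these few lines, plus relinking — the block objects themselves stay compiled.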
My question is: is this a false optimization? It’s possible that most of the compilation time is actually taken up by the linker, or that the driver simply appends the sources together and compiles them during the “linking” phase.
Does anybody have experience implementing something similar to this? With the fixed function signatures, I don’t see a way of pushing things like custom data through between consecutive blocks. One solution is to just pass every possible piece of data a block could need inside a struct, but this seems really wasteful and doesn’t account for things like custom attributes or varyings. It seems like it would be much easier to just jam everything into one long function — that way I could just use a variable for each semantic to carry data between blocks. Unfortunately, that also seems to make the dynamic rebuilding very costly.
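Just to illustrate the struct approach I’m describing (field names and the `fogBlock` example are hypothetical — the real struct would be the union of everything any block class might touch):

```glsl
// "Pass everything" struct: every block takes and returns the whole thing,
// even if it only reads or writes one or two fields.
struct BlockData {
    vec3 normal;
    vec3 viewDir;
    vec4 color;
};

// Example fog block: only uses viewDir and color, but must still
// shuttle the entire struct through. The 0.01 density is arbitrary.
BlockData fogBlock(BlockData d)
{
    float fogFactor = clamp(length(d.viewDir) * 0.01, 0.0, 1.0);
    d.color = mix(d.color, vec4(0.5, 0.5, 0.5, 1.0), fogFactor);
    return d;
}
```

The wastefulness shows up exactly here: the struct has to grow with every new semantic any block might ever need, and custom per-material attributes still don’t fit.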
Anyway, thanks for reading my super-long post!