GLSL and 3d engine

I’m in the process of building a 3D engine with OpenGL 3.2 & GLSL. I’m not aiming for the best possible speed but for some flexibility. My question is: how do you manage it all? It seems to me that there will be “many” shaders. Let’s say I want to support three light sources, and these can be of different types. Each type will have different settings (for example, point light versus spotlight). Without conditionals in the shaders, and without, for example, a generic light structure holding all parameters for all types of light, there will be many shaders just to deal with the various combinations of lights.
And the complexity increases in situations where there may be a texture or just a color for diffuse, a bump map or no bump map, different lighting models, etc.

When I do the math and start counting the number of individual shaders needed to meet every scenario, it seems like a lot of work just to manage it all. Conditionals in the shader would at least keep the number of shaders down, but having them on a per-fragment basis must really hurt performance.

Any tips on dealing with this kind of situation?

I’m just a newbie and I was asking myself the same thing. I found this old post:

http://www.gamedev.net/community/forums/topic.asp?topic_id=499863

Basically, they suggest using preprocessor directives to enable or disable portions of code. I don’t know if this is what you mean by “using conditionals”. Anyway, just a pointer.

Regards,
Rob C

Yeah, been down that road. Separate shader “source code” for the different permutations is a disgusting maintainability headache and pretty future-limiting, especially the auto-shader-generation via shading-graph thing. My guess is you probably don’t want to go there. So: unified source code for most if not all of your shaders.

So then what are the options for introducing shader variability?

a) COMPILER CONDITIONALS: The first is to use “if” conditionals based on const identifiers, or #ifdefs based on preprocessor #define identifiers (same diff, except the former reads “tons” better and is therefore easier to maintain). Using different const or #define identifier definitions, you can compile the individual shaders you need based on your “shader permutation inputs” (e.g. number of lights, light type per light, etc.). These conditional expressions used for ifs, for loops, etc. just compile right out of the shader, leaving you with straight-through sequential code, and the GPU doesn’t see any conditionals/branches in the shader assembly. Then you just form some way to associate a “GL state change group” in your app (which identifies a “shader permutation”) with a shader, and you’re pretty well there.
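To make a) concrete, here’s a minimal app-side sketch in plain C (build_preamble, NUM_LIGHTS and LIGHT0_TYPE are made-up names for this illustration, not part of any real API): you generate a #define block for the permutation you want and prepend it to the one unified source.

```c
#include <stdio.h>

/* Sketch only: generate the #define preamble for one shader permutation.
 * The permutation inputs (number of lights, type of light 0) get baked
 * into the source text, so the GLSL compiler folds the conditionals away. */
int build_preamble(char *buf, size_t size, int num_lights, int light0_type)
{
    return snprintf(buf, size,
                    "#define NUM_LIGHTS %d\n"
                    "#define LIGHT0_TYPE %d\n",
                    num_lights, light0_type);
}
```

The compile step would then pass two strings, something like glShaderSource(shader, 2, strings, NULL) with strings = { preamble, unified_source }, so the unified source never changes on disk. One wrinkle: a #version directive has to come first in GLSL, so in practice the preamble either carries the #version line itself or gets spliced in right after it.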

b) RUN-TIME CONDITIONALS: Another option (a slight variation on the previous) is to put your conditions in the shader as “real ifs”. You do that by making the if conditions depend on identifiers the compiler doesn’t know at compile time, e.g. uniforms. Then the GLSL compiler sees that the ifs have to stay – it can’t optimize and compile them out. You have less shader switching this way, but you pay the branch overhead for each “if” in the shader. The general rule I’ve heard is that coherent branches aren’t too expensive, but non-coherent branches can be. You’ll have to try it on your target platforms and see.

c) MIX a) AND b): And of course you can mix the two to taste. For instance, have “shader permutation input A” be a dynamic if (value passed in by a uniform), “shader permutation input B” be a const (i.e. hard-coded into the shader), etc.

Examples:

a)

  const int  FOG_MODE  = FOGMODE_VTX_EXP2;
   ...
   if ( FOG_MODE == FOGMODE_VTX_EXP2 )
      ...

b)

  uniform int FOG_MODE;
   ...
   if ( FOG_MODE == FOGMODE_VTX_EXP2 )
     ...
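And a c)-style sketch in the same vein as the two examples above (NUM_LIGHTS and SHADOWS_ENABLED are made-up permutation inputs, just for illustration):

```glsl
const int   NUM_LIGHTS      = 2;   // known at compile time: loop unrolls
uniform int SHADOWS_ENABLED;       // unknown at compile time: branch stays

   ...
   for ( int i = 0; i < NUM_LIGHTS; i++ )
   {
      ...
      if ( SHADOWS_ENABLED == 1 )
         ...
   }
```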

Note: there are also shader subroutines, but I’ve not bumped up to that GLSL version and so haven’t played with those yet – just browsed a source example. Hopefully someone else with more experience with those will chime in with their thoughts.

There is also d):
implement “subroutines” in separate shader objects.
e.g.
main shader:


vec4 get_diffuse();
void main(){
 gl_FragColor = get_diffuse();
}

subroutine shader:


uniform vec4 mat_diffuse;
vec4 get_diffuse(){
 return mat_diffuse;
}

When linking a shader of a particular material, attach the required subroutines that implement the current material’s data.

There is also e): the subroutines of OpenGL 4.x – but you need OpenGL 4.x …
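For the record, here’s roughly what e) looks like, following the same get_diffuse example as d) (illustrative names only – I’m just sketching the GLSL 4.00 syntax): the app picks the implementation per draw call with glUniformSubroutinesuiv() instead of relinking or branching.

```glsl
#version 400

subroutine vec4 DiffuseFunc();                 // subroutine type
subroutine uniform DiffuseFunc get_diffuse;    // selected by the app

uniform vec4      mat_diffuse;
uniform sampler2D diffuse_tex;
in  vec2 tex_coord;
out vec4 frag_color;

subroutine( DiffuseFunc ) vec4 diffuse_color()
{
   return mat_diffuse;
}

subroutine( DiffuseFunc ) vec4 diffuse_texture()
{
   return texture( diffuse_tex, tex_coord );
}

void main()
{
   frag_color = get_diffuse();
}
```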

Thanks for all the answers =)

The d) approach (functions) seems interesting, although the question is whether it should be treated as shader permutations, with every combination of shader and functions set up as an individual program, or whether it would be feasible to link to a particular program at run time/render time. As far as I can see, one can detach shaders but not “unlink” the program. I guess one has to “unuse” the program before detaching and attaching shaders.

Otherwise, the c) approach of mixing conditional types seems to yield more manageable code than relying on compiler conditionals only. And of course, if every shader is treated as a permutation, the compiler conditionals could be used to activate different function definitions and thus keep everything in the same source.

The e) option is out of my reach since I don’t have hardware support for OpenGL 4.x right now.

I’ve been testing some basic functions defined in a separate shader, and it seems one doesn’t have to remove the program from the pipeline to detach the “function shader”. Just detaching, attaching and linking works. However, I guess there might be pitfalls doing this during rendering? Can detaching and linking interrupt rendering that is occurring in the hardware? Perhaps some sort of “program permutation management” is better than doing this at run time?
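Along that “program permutation management” line, a sketch of what I mean, in plain C (get_program and link_permutation are invented names; link_permutation is a stub standing in for the real compile/attach/link path, so the sketch runs without a GL context): map a bit-packed permutation key to an already-linked program, created lazily and never relinked mid-frame.

```c
#include <assert.h>

/* Sketch: one linked program per shader permutation.  The key would be
 * bit-packed from render state, e.g. (num_lights << 1) | bumpmap_on. */
#define MAX_PERMUTATIONS 64

static unsigned cache_keys[MAX_PERMUTATIONS];
static unsigned cache_programs[MAX_PERMUTATIONS]; /* GLuint in real code */
static int      cache_count = 0;

/* Stub standing in for: compile the unified source with the right
 * #defines (or attach the right "function shader"), link, return id. */
static unsigned link_permutation(unsigned key)
{
    return 1000u + key; /* fake program id, for the sketch only */
}

unsigned get_program(unsigned key)
{
    for (int i = 0; i < cache_count; i++)
        if (cache_keys[i] == key)
            return cache_programs[i];      /* already linked: reuse */

    assert(cache_count < MAX_PERMUTATIONS);
    cache_keys[cache_count]     = key;
    cache_programs[cache_count] = link_permutation(key);
    return cache_programs[cache_count++];
}
```

That way the detach/attach/relink never happens while a frame is in flight – at draw time it’s only ever a glUseProgram on a cached id.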

Another question: does the type of shader matter when you’re using it to hold functions? In my simple tests I assumed that if the functions are to be used with the fragment shader, the shader holding the functions should also be a fragment shader. This would have an impact on the system design, since a “function shader” could potentially have functions that should be used in both the vertex and the fragment shader.

  1. Don’t relink – just store a separate program, because you’ll need the old one later.

  2. Each shader type should have its own functions, which are visible only from shaders of that type. This limitation is not properly supported by drivers yet, despite being present in the spec.
