According to Randi Rost's book (OpenGL Shading Language), we should make use of the modular nature of the language…
Is this as simple as adding a second fragment shader to the program to create a "second step"?
Surely that defies logic - there would be two main() functions!
From what I understood from the GLSL spec, you can have several fragment shader objects in use together under one program container.
Exactly one of any group of fragment shader objects used together like this should have a main() function.
Quoting from memory here, so I'm not sure how the separate fragment shaders communicate - perhaps the linking stage takes care of it, perhaps you have to use shared variables…
If you haven’t already got the GLSL spec, go here and get it…
Also, the relevant extensions specs are
and you might also want
these being available here
It’s similar to linking with normal CPU programs. You attach more than one shader object to a program object, but only one of them contains a main() function. This shader can then call functions declared in the other shaders, and global variables that have the same name are shared.
This thread on the forum answers your question
Basically, it seems you need to prototype any functions from shader B that you're going to use in shader A, and then the linker goes and resolves them…
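A minimal sketch of the idea (all names here are made up for illustration): shader A contains the only main() plus a prototype for a function that is actually defined in shader B; both objects are attached to the same program object, and the link step resolves the call. Globals with the same name end up shared.

```glsl
// shaderA.frag -- attached to the program; contains the only main()
vec4 applyFog(vec4 color);   // prototype; the definition lives in shaderB.frag

varying vec3 normal;         // globals with the same name are shared across objects

void main()
{
    vec4 base = vec4(normalize(normal) * 0.5 + 0.5, 1.0);
    gl_FragColor = applyFog(base);
}
```

```glsl
// shaderB.frag -- attached to the same program; no main() here
varying vec3 normal;

vec4 applyFog(vec4 color)
{
    float f = clamp(gl_FragCoord.z, 0.0, 1.0);
    return mix(color, vec4(0.5), f);
}
```

On the application side you would call glAttachShader (or glAttachObjectARB) once per shader object before linking the program.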
Having mentioned the GLSL and extensions specs above, I subsequently went and looked at them, and was unable to find anything that provided an answer.
Is that just me not looking hard enough, or do they not actually say that anywhere?
I was really wondering whether this allows multiple passes…
I need access to 29 texture units from my frag shader (don't ask - it's a final year uni project) and the max per pass is only 16 (NV40).
If this isn't going to work, can anyone recommend an alternative…?
I don't have the book, but one thing that allows programs to be assembled from small source-code modules is the glShaderSource parameters: you can give an array of string pointers which, combined, form a single shader's source.
The other thing is that you can attach an arbitrary number of shader objects to build one program object. Of course, exactly one main() entry function must be present for the vertex part and one for the fragment part.
If these are 2D textures, one way to increase the number of accessible textures is to use cubemaps: you can pack six texture images into one unit, and addressing is done via the lookup vector.
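As a sketch of the cubemap trick: the major axis of the lookup vector selects one of the six faces, so six 2D images sit behind a single samplerCube unit. The face-selection mapping below is illustrative, not the author's code:

```glsl
uniform samplerCube faces;  // six 2D images behind one texture unit

// Sample the +X face at 2D coords st by pointing the lookup
// vector along the +X axis; the other two components pick the texel.
vec4 sampleFacePosX(vec2 st)
{
    vec2 uv = st * 2.0 - 1.0;               // remap [0,1] -> [-1,1]
    return textureCube(faces, vec3(1.0, -uv.y, -uv.x));
}
```

Similar helper functions per face would cover all six images, at the cost of a branch or a precomputed direction in the caller.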
Another way would be to put the textures into a 3D texture and look up individual slices.
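In the same spirit, stacking the images as slices of a 3D texture lets one sampler3D hold many layers, with the r coordinate picking the slice. A sketch assuming N equally spaced slices (names illustrative):

```glsl
uniform sampler3D stack;    // N 2D images stacked along the r axis
uniform float numSlices;    // e.g. 16.0

// Sample slice i at 2D coords st; r addresses the centre of the slice
// so that linear filtering doesn't pull in the neighbouring layer.
vec4 sampleSlice(vec2 st, float i)
{
    float r = (i + 0.5) / numSlices;
    return texture3D(stack, vec3(st, r));
}
```

One caveat: with linear filtering enabled on the r axis, adjacent slices can still bleed into each other, so nearest filtering along r may be safer.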
The third way is multipassing.
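A sketch of how multipassing could look for the 29-texture case (all names illustrative, and this is one possible split, not a prescribed method): pass 1 combines the first 16 textures and its output is captured to a texture (e.g. via a pbuffer or glCopyTexSubImage2D); pass 2 then reads that intermediate result plus the remaining 13.

```glsl
// pass1.frag -- combine textures 0..15 and write the partial result.
// Indexing a sampler array with a loop counter assumes the compiler
// unrolls the loop; otherwise write the 16 lookups out by hand.
uniform sampler2D tex[16];
varying vec2 uv;
void main()
{
    vec4 sum = vec4(0.0);
    for (int i = 0; i < 16; ++i)
        sum += texture2D(tex[i], uv);
    gl_FragColor = sum;
}
```

```glsl
// pass2.frag -- read the pass-1 result plus the remaining 13 textures,
// using 14 of the 16 available units.
uniform sampler2D partial;   // pass-1 output, bound to its own unit
uniform sampler2D tex[13];   // the other 13 source textures
varying vec2 uv;
void main()
{
    vec4 sum = texture2D(partial, uv);
    for (int i = 0; i < 13; ++i)
        sum += texture2D(tex[i], uv);
    gl_FragColor = sum;
}
```

This assumes the combining operation is associative (e.g. a sum or product), so it can be split across passes without changing the result.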
The exact method of the aforementioned "multipassing" is the thing I'm actually searching for…
I just can't seem to find any clues on the web.
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.