#include in GLSL

Just imagine you want to do something like:
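For instance, something along these lines (a hypothetical sketch: core GLSL has no #include, and the file names and helper functions here are invented for illustration):

```cpp
// Hypothetical: the kind of shader source people want to be able to write.
// Core GLSL has no #include, so this would not compile today; the included
// files and the functions they would provide are made up.
const char* fragmentSrc = R"glsl(
    #version 150
    #include "lighting_common.glsl"  // shared lighting helpers
    #include "fog_common.glsl"       // shared fog helpers

    out vec4 fragColor;

    void main()
    {
        fragColor = applyFog(shadeSurface());
    }
)glsl";
```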

I see what you’re trying to do. But have you considered that, while #include can solve these problems, it may not be the best way to solve them? You’re trying to solve a lot of different problems with these macros. Maybe there’s a better way.

Personally, I’ve found that defining my own shading language is a more reasonable solution. I can control the syntax (having auto and template-like functionality is cool) and I have the tools to abstract away extensions and so forth. It was a lot of initial work, but it seems to have worked out well so far.

In any case, there’s one problem left to deal with in terms of implementing #include: how do you do it? Do you make it a callback, or do you supply a number of shaders that you give string names to? How does that deal with the possibility of threading shader creation? How does it handle GLX stuff where the server is running on a different machine?
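To make the two options concrete, here is a rough sketch of what the two API shapes could look like. Every name and signature below is invented for illustration; none of this is a real or proposed GL entry point:

```cpp
// (a) Callback style: the compiler calls back into the application whenever
//     it encounters an #include directive it needs resolved.
typedef const char* (*IncludeResolverFn)(const char* path, void* userData);
void glShaderIncludeCallbackHYP(IncludeResolverFn resolver, void* userData);

// (b) Named-string style: the application registers every potential include
//     under a string name up front; the compiler resolves #include against
//     that set and never has to re-enter application code.
void glNamedShaderStringHYP(const char* name, int sourceLength,
                            const char* source);
```

Note how the named-string shape sidesteps both of the questions above: there is no reentry into application code (so no threading constraint), and all the sources live server-side before compilation starts (so the remote-GLX case works unchanged).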

Personally, I’ve found that defining my own shading language is a more reasonable solution.

I’m a graphics guy, not a compiler guy. To me this solution sounds like overkill and might introduce more problems than it solves. Plus, how would I implement dependencies on #defines that only the real GLSL preprocessor knows? And it still leaves me with the question, how do I solve the #include problem then :wink:

Both Cg and HLSL support includes.
Cg apparently supports both callbacks and “pre-uploaded files/source strings”; D3D uses callbacks.

Yes, from OpenGL’s perspective, callbacks are scary (the API has none to date). The advantage of callbacks is that you only need to load/upload files that are actually referenced.
The mechanism Cg calls a “virtual file system” requires that you upload each and every file that might be needed in advance (you don’t know which ones unless you start parsing the GLSL yourself, which I really want to avoid). But in the end… there are not that many anyway, say fewer than 100 files of a few KB each. Uploading them takes a split second.
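In code, the pre-upload model might look like this (a sketch: readTextFile and glNamedShaderStringHYP are invented names carried over from the sketch above; glCreateShader, glShaderSource and glCompileShader are the real GL calls):

```cpp
#include <string>

// Pre-upload model in practice: register every include file once up front,
// then compile as usual. For a typical project this is well under 100 files
// of a few KB each, so the upfront cost is negligible.
void uploadIncludesAndCompile(const char* fragmentSrc)
{
    for (const char* name : {"lighting_common.glsl", "fog_common.glsl"})
    {
        std::string src = readTextFile(name);                       // hypothetical helper
        glNamedShaderStringHYP(name, (int)src.size(), src.c_str()); // hypothetical call
    }

    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragmentSrc, nullptr);
    glCompileShader(shader); // would resolve #include against the registered names
}
```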

The question of threaded compilation is a different topic. I’m against introducing “truly” asynchronous compilation, i.e. start compilation, set a sync object, render some things, come back a few seconds later, check the sync object for completion and see if it worked. This is not how graphics programs work. They either need a shader or they don’t.
I guess that by asynchronous compilation you really want to fight the underlying problem: slow compilation. So why not solve that problem in a better way, by introducing precompiled binary blobs?
And if you still wanted asynchronous compilation, you can already do that by compiling in a second thread using an auxiliary (list-sharing) context. It’s just a matter of how clever the driver writers are whether it really gains you performance.
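A sketch of what that looks like, assuming an auxiliary context has already been created with object sharing enabled (makeCurrent is a hypothetical wrapper around the platform call, wglMakeCurrent/glXMakeCurrent; the GL calls themselves are real, and GL headers are assumed to be included):

```cpp
#include <thread>

// Worker thread: compile on an auxiliary context that shares objects with
// the main context. Whether the driver actually compiles in parallel is
// entirely up to the implementation.
void compileWorker(void* auxContext, GLuint shader)
{
    makeCurrent(auxContext); // hypothetical platform wrapper
    glCompileShader(shader);
    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    // signal the main thread (atomic flag / condition variable) that the
    // shader is ready, passing 'ok' along
}

// Main thread keeps rendering on the primary context meanwhile:
//   std::thread worker(compileWorker, auxContext, shader);
//   ... render ...
//   worker.join();
```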

Plus, how would I implement dependencies on #defines that only the real GLSL preprocessor knows?

Why would you need to? All of the GLSL #defines are based on things you can query about the implementation (extensions, version number, etc.) through the OpenGL API. So the system can be well aware of which features are available and which are not.
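For example (these are all real GL 3.0-style queries; only the surrounding function name is invented):

```cpp
// Everything the built-in GLSL macros reflect is available through the API,
// so a custom front end can know it too and emit its own #defines.
void queryCapabilities()
{
    GLint major = 0, minor = 0, numExts = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    glGetIntegerv(GL_NUM_EXTENSIONS, &numExts);

    for (GLint i = 0; i < numExts; ++i)
    {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        // e.g. decide here whether the front end emits "#define HAS_FOO 1"
    }

    const char* glsl = (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
}
```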

And it still leaves me with the question, how do I solve the #include problem then

However you want. The point of making a language is to be able to do what you want. So if you want to define a functional-programming-based shader language, you’re free to do so. If you want to define importing the way that C# or Java does, again, you can do so.

The question of threaded compilation is a different topic.

No, it isn’t a different topic. Right now, the implementation has the freedom to compile shaders in another thread. This is a good thing. Once you have callbacks, you have to guarantee that each callback happens on the thread that triggered it, and that it happens synchronously with the call to CompileShader or LinkProgram, as needed.

I do not want to see that freedom taken away from implementations.

I do not insist on callbacks at all costs. I can live with preloaded source strings as well.

Right now, the implementation has the freedom to compile shaders in another thread

Right now glCompileShader() and glLinkProgram() are not asynchronous anyway… After some time they return and tell me if it worked or not. So how could they benefit from multithreading? Using multiple threads to compile a single shader? Is that really done in today’s compilers?

Right now glCompileShader() and glLinkProgram() are not asynchronous anyway… After some time they return and tell me if it worked or not.

No, they don’t. They only have to tell you when you ask them. That is, when you call glGetShaderiv/glGetProgramiv. Until then, the actual work can be done on another thread.

Admittedly, few programs actually wait very long after starting the compile/link before checking whether it completed. But the freedom for the implementation is there.
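In other words, an application can already lean on that freedom today. All the GL calls below are real; only the framing function and the shader object parameters are assumed:

```cpp
// vs and fs are previously created shader objects with source attached.
void compileDeferred(GLuint vs, GLuint fs)
{
    // Kick off compilation; the driver may keep working in the background.
    glCompileShader(vs);
    glCompileShader(fs);

    // ... do other CPU-side setup here: load textures, build buffers, etc. ...

    // Only now force the results; a clever driver had time to finish meanwhile.
    GLint vsOk = GL_FALSE, fsOk = GL_FALSE;
    glGetShaderiv(vs, GL_COMPILE_STATUS, &vsOk);
    glGetShaderiv(fs, GL_COMPILE_STATUS, &fsOk);
    if (!vsOk || !fsOk)
    {
        // fetch details with glGetShaderInfoLog
    }
}
```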

@Godlike

Please do clean and release your stuff. It’s very useful, could help many people.

Great. I’m working on it.

I suppose. But I don’t really understand how it could have been updated. It is essentially just a library that makes OpenGL calls. Who would be responsible for distributing it?

Just as Khronos makes a GL standard and an EGL standard, and there is a GLX standard (not clear whether that is maintained by Khronos or not), it is feasible to create a GLU standard for GL3 or higher. As for who makes it and who distributes it, this is how things stand as of now:

For most Linux distributions, AFAIK, GLU comes from Mesa. [Quite likely this is the case for almost all open-source OSes as well: BSD, etc.]

For MS-Windows, AFAIK, GLU is an aging implementation from Microsoft.

For Mac, most likely Apple makes and maintains GLU, but I am not too sure.

At any rate, GLU is worth standardizing and updating. What remains is an implementation, which I would bet Mesa, at the very least, would be on top of quite quickly. [Witness WebGL, another standard from Khronos, being supported in WebKit.]

You can use Mesa’s GLU even on Windows. AFAIK it’s distributed under the MIT licence, which is very closed-source friendly.