GLSL validation

What’s the best way to ensure that GLSL code compiles on most implementations? I’m developing mainly on Nvidia hardware, whose driver isn’t strict enough for my taste. I’d like to validate the code before putting out new builds to our testers, to make sure everything compiles properly on non-NV hardware. It is too time-consuming to wait for tester feedback on just a few syntactical/grammatical errors.

So, I have been using 3dlabs’ glslang.dll to validate code, but it has a few bugs (it trips over nested #ifdefs, for example) and is becoming outdated (dated Sept 20, 2005). Is anyone aware of more recent code? And how about bringing it up to date and including it in the GL SDK?

How do other GL developers here test their shaders for compatibility (other than testing it on lots and lots of different hardware setups)?

All I can tell you is to take care that you have an alternative for older hardware that doesn’t have the extensions/functions you use (for example sampler2DRect (1.40)). :(

Yeah, by default it’s pretty lax, and permits Cg-isms.

You can get much stricter GLSL syntax by using:

#extension ###

where ### >= 110 (e.g. 120). See the NVidia GLSL release notes for more details.

I have often thought that someone “better than me at compilers” should write a GLSL pre-processor: basically, run the pre-processor (#defines etc., and possibly #includes for files) and output validated GLSL.

As a bonus, it could do dead code elimination and constant folding (so stupid compiler errors in dead paths do not stop code from running).

I know the Cg compiler can kind-of do this (take GLSL code in and output GLSL code), but I think it only targets GLSL 1.0 as a destination? (It also seems to reformat the code into an assembly-like format.)
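(For reference, the GLSL-in/GLSL-out mode I mean is invoked with something like cgc -oglsl -profile glslv shader.vert, where -oglsl parses the input as GLSL and the glslv/glslf profiles emit GLSL again. I’m quoting those flags from memory, so check them against the usage output of your Cg release.)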

[QUOTE]
Yeah, by default it’s pretty lax, and permits Cg-isms.

You can get much stricter GLSL syntax by using:

#extension ###

where ### >= 110 (e.g. 120). See the NVidia GLSL release notes for more details.
[/QUOTE]

did you mean,

#version ### ?
#extension will ensure that an extension is supported to compile the shader.
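(So the practical fix is simply to make something like “#version 120” the first line of the shader; per the NVidia release notes mentioned above, declaring a version >= 110 is what switches the compiler into strict mode. That’s my reading, anyway; worth verifying on your driver.)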

The 3dlabs OpenGLCompiler supposedly does all of that. You can pass “EShOptNone”, “EShOptSimple” or “EShOptFull” to ShCompile(). I use “EShOptNone” for quick validation.
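For anyone who hasn’t tried it, a minimal validation pass looks roughly like the sketch below. I’m writing the signatures from memory of 3dlabs’ ShaderLang.h, so treat them as assumptions and check them against your copy of the header:

#include <stdio.h>
#include <string.h>
#include "ShaderLang.h"   /* from the 3dlabs GLSL compiler sources */

/* Validation-only compile of a vertex shader; returns nonzero on success. */
int validate_vertex_shader(const char* source)
{
    /* Built-in limits; a real tool should fill these with the minimum
       values the GL spec guarantees rather than zeros. */
    TBuiltInResource resources;
    memset(&resources, 0, sizeof(resources));

    ShInitialize();
    ShHandle compiler = ShConstructCompiler(EShLangVertex, 0 /* debug options */);

    const char* strings[] = { source };
    /* EShOptNone: parse and error-check only, which is all validation needs. */
    int ok = ShCompile(compiler, strings, 1, EShOptNone, &resources, 0);
    if (!ok)
        fprintf(stderr, "%s\n", ShGetInfoLog(compiler));

    ShDestruct(compiler);
    ShFinalize();
    return ok;
}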

Using the #version directive may work around the problem on NV, but I prefer to have some validation that is independent of hardware/driver.

Agreed, and for the same reason (compensating for underfunctional GLSL compilers). Beyond that, it would make it easier to cache this intermediate, dead-code-eliminated form for subsequent runs, regardless of vendor driver, so as not to waste valuable “tens of seconds” (no, I only wish I was kidding) while the compiler goes off, parses, DAG-analyzes, and throws away much of the shader through dead code elimination for a bunch of materials.

This would be much better than the alternative, which is turning your shader into an #ifdef/#endif nightmare, trying to prevent the compiler from wasting your user’s precious time doing busywork (or building careful sprintf logic outside the compiler to build dead-code-eliminated shaders – ugly).
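For concreteness, the string-pasting approach just mentioned looks something like this; the defines and names are made up for illustration:

#include <stdio.h>

/* Illustrative only: paste #defines ahead of a master shader source so
   the driver never even sees the code paths we know are dead. */
void build_variant(char* out, size_t out_size,
                   int use_fog, int num_lights,
                   const char* master_source)
{
    snprintf(out, out_size,
             "#version 120\n"
             "#define USE_FOG %d\n"
             "#define NUM_LIGHTS %d\n"
             "%s",
             use_fog, num_lights, master_source);
}

It works, but every new switch multiplies the variants you have to manage by hand, which is exactly the busywork a shared front-end with real dead code elimination would take off our plates.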

But then again, we’re back to the wish for precompiled shaders in GLSL that can be cached on disk…, which is another thing this’d be useful for in their absence.

[QUOTE]
I know the Cg compiler can kind-of do this (take GLSL code in and output GLSL code), but I think it only targets GLSL 1.0 as a destination? (It also seems to reformat the code into an assembly-like format.)
[/QUOTE]

I’ve wished for the same, targeting > GLSL 1.00 and preserving some semblance of the original variable names, rather than producing this kind of (hardly traceable) output:

...
void main()
{
    _ZZ3SrZh0133 = _ZZ3SZaTMP243.x*_ZZ2Sgl_ModelViewMatrix[0];
    _ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.y*_ZZ2Sgl_ModelViewMatrix[1];
    _ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.z*_ZZ2Sgl_ModelViewMatrix[2];
    _ZZ3SrZh0133 = _ZZ3SrZh0133 + _ZZ3SZaTMP243.w*_ZZ2Sgl_ModelViewMatrix[3];
    _ZZ3SrZh0135 = _ZZ3SZaTMP244.x*_ZZ2Sgl_NormalMatrix[0];
...

It’s occasionally useful even so, but it could be much more useful still with meaningful names.

However, with just the former, cgc could be an effective prefilter/precompiler for feeding all vendors’ drivers (though this “kludge” wouldn’t be in the best interests of OpenGL; better for GL to change the shader model to aid vendor stability and support precompiled disk-persistent shaders).

Faced with GLSL compiler quality issues from some vendors, I’d think either of these would help:

1. an ARB-standard, shared compiler (produces an abstract parse DAG and does dead code elimination), and/or
2. a user-space compiler->assembly + driver-space assembly->machine code model

Right now we have every vendor going off to develop their own supposedly-identical high-level language parser and optimizer for the exact same spec, and then wondering, star-struck, why they don’t work exactly the same in the end (wow… you don’t say).

[QUOTE]
did you mean,

#version ### ?
[/QUOTE]
Yeah, my bad. Thanks for the correction.

NVemulate has this “Generate Shader Portability Errors” switch… (?)

CatDog