version acrobatics

Since __VERSION__ is often set to 120 by default, even if a higher version is available (why is that, by the way?), I check for the highest shader version available on the GPU and prepend the maximum available GLSL version, say,


#define GLSL_VERSION 150

to all shader code I load. I then do something like,


#if (GLSL_VERSION > __VERSION__)
// here I set the maximum supported GLSL version
#version 150
#endif // GLSL_VERSION

#if (__VERSION__ > 120)
# define IN in
# define OUT out
#else
# define IN attribute
# define OUT varying
#endif // __VERSION__

in my shader. I also check for __VERSION__ in my shader code. This way I can support all possible GLSL shader versions in a certain interval with the same shader code.
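The application-side prepend step can be sketched like this (a minimal C++ sketch; prependVersionDefine is an illustrative name of mine, and maxVersion is assumed to have been queried from the driver elsewhere):

```cpp
#include <string>

// Prepend the GLSL_VERSION define to a shader source before handing it
// to glShaderSource(). maxVersion is assumed to be the highest GLSL
// version the driver reports.
std::string prependVersionDefine(int maxVersion, const std::string& shaderSource) {
    return "#define GLSL_VERSION " + std::to_string(maxVersion) + "\n" + shaderSource;
}
```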

My question is, does there exist a better way to do the same task, i.e. support an interval of GLSL versions in the same shader code?

Hmm, no answers? Well, as it turns out, the trick does not work on ATI cards but works very well on NVIDIA cards. So now I simply get the maximum version the GPU supports and inject it on top of every shader source I load.
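Querying and injecting that maximum version could look roughly like this (a sketch; parseGlslVersion and injectVersion are illustrative helpers, and the version string is assumed to come from glGetString(GL_SHADING_LANGUAGE_VERSION), which typically returns something like "1.50" or "4.60 NVIDIA ..."):

```cpp
#include <string>

// Convert the GL_SHADING_LANGUAGE_VERSION string (e.g. "1.50" or
// "4.60 NVIDIA ...") into a #version number such as 150 or 460.
int parseGlslVersion(const std::string& versionString) {
    std::size_t dot = versionString.find('.');
    int major = std::stoi(versionString.substr(0, dot));
    int minor = std::stoi(versionString.substr(dot + 1, 2));
    return major * 100 + minor;
}

// Put "#version N" on top of a shader source that does not declare one.
std::string injectVersion(int version, const std::string& source) {
    return "#version " + std::to_string(version) + "\n" + source;
}
```

At runtime you would feed parseGlslVersion the string returned by glGetString(GL_SHADING_LANGUAGE_VERSION) and pass the result to injectVersion before compiling each shader.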


You could just use the compatibility profile for GL versions 3.2 or greater. In GLSL, you would set “#version 150 compatibility”. It will give you deprecation warnings, but if you’re looking for backwards compatibility with GL 2.1, then that’s what you use.

“The #version directive must occur in a shader before anything else, except for comments and white space.” So your shader may cause an error:

#define GLSL_VERSION 150
#if (GLSL_VERSION > __VERSION__)
#version 150
#endif

Yeah frenk, I fixed this. Now I check the maximum supported version and do:

#version 400

or something like that, as the first line of every shader I load.