glslang constant name

const int gl_MaxLights = 8; // GL 1.0
const int gl_MaxClipPlanes = 6; // GL 1.0
const int gl_MaxTextureUnits = 2; // GL 1.2
const int gl_MaxTextureCoordsARB = 2; // ARB_fragment_program
const int gl_MaxVertexAttributesGL2 = 16; // GL2_vertex_shader
const int gl_MaxVertexUniformFloatsGL2 = 512; // GL2_vertex_shader
const int gl_MaxVaryingFloatsGL2 = 32; // GL2_vertex_shader
const int gl_MaxVertexTextureUnitsGL2 = 1; // GL2_vertex_shader
const int gl_MaxFragmentTextureUnitsGL2 = 2; // GL2_fragment_shader
const int gl_MaxFragmentUniformFloatsGL2 = 64; // GL2_fragment_shader

Not really an issue, but I think these extension suffixes aren’t clean.

If those constants are part of the shading language spec, there is no point in keeping the extension suffix. Especially if the language gets integrated into the OpenGL 2.0 specs, what’s the point of “ARB” and “GL2”? Of course it’s ARB and GL2, since it’s in OpenGL 2.0.

And even if the language is used by another API and/or application, the latter should still respect the standard constants.
Only if they want to add their own constants would it make sense to use a suffix, for their own extensions.

Thus I can see the usefulness of suffixes for future (non-standard) extensions to the language, but not for the standard version itself.

These suffixes are error-prone; I can already see shaders failing to compile simply because someone forgot to put an “ARB” or a “GL2” here and there.

Plus they are a sign of the extension-mess legacy that OpenGL 2.0 is trying to get rid of.

The suffixes exist because that’s how the ARB requires extensions to mark their #defines (and they are #defines, not "const int"s). Extensions are required to use the suffix for a very good reason: OpenGL is bound to C, and C doesn’t support namespaces. Name conflicts would be possible.

For example, let’s say that NV_vertex_program exposes GL_MAX_VERTEX_ATTRIBUTES. Now, when ARB_vertex_program comes along, it wants to expose this too. Unfortunately, NV_vertex_program already defines it. So, they would either need to use the nVidia enum or make up a new one. The suffix makes it so that name conflicts never happen; you can always tell which extension exposes which functionality and enums/constants/#defines.

OpenGL 2.0 is going to have extensions too. It has to; the only reason OpenGL 1.x has survived as a viable development platform is because of extensions (and I don’t mean ARB-extensions). The ability to expose new functionality on implementations that support it is very important to making OpenGL useful.

I think we are not talking about the same thing.

I’m talking about the OpenGL Shading Language spec (the quote above was taken from it), not about the OpenGL API.