[quote]I really do not know what the big point of the core-profile is. If an attribute is specified with vertexAttrib or texCoord and thus named myVertexAttrib or gl_MultiTexCoord in the shader - what is the big difference?[/quote]
Well, let’s ignore the obvious point of having a variable name that actually describes the contents of its data, instead of pretending
gl_MultiTexCoord3 really means “matrix bone weights”. Let’s look at purely practical issues: things you cannot do with compatibility attributes.
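To make the naming point concrete (the attribute name below is hypothetical), compare what the two styles look like inside a vertex shader:

```glsl
// Compatibility profile: the builtin's name says nothing about the data.
vec4 weights = gl_MultiTexCoord3;  // "texture coordinate 3" is really bone weights

// Core profile: the name is the documentation.
in vec4 boneWeights;
```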
Non-generic vertex attributes cannot:
1: Be integer or double-precision.
gl_Vertex is, and always will be, a
vec4. Not an ivec4, and not a dvec4.
2: Use more resources than specified. There are exactly and only 8 texture coordinates, one position, two colors, one normal, and a single floating-point fog coordinate. If you happen to need more per-vertex attributes than that, tough. Even if your hardware could provide more of them, you can’t use them.
3: Use the new split-format syntax.
4: Use instanced arrays.
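As a rough sketch of the first two points (the names and the version are illustrative, not from the post), generic attributes lift those restrictions directly:

```glsl
#version 410 core
// Generic attributes are typed however the data demands:
in ivec4 boneIndices;   // integer attribute (fed with glVertexAttribIPointer)
in dvec3 precisePos;    // double-precision attribute (glVertexAttribLPointer, GL 4.1+)
in vec4  boneWeights;   // ...and you get up to GL_MAX_VERTEX_ATTRIBS of these, not a
                        // fixed menu of 8 texcoords + position + 2 colors + normal + fog

void main()
{
    gl_Position = vec4(vec3(precisePos), 1.0);
}
```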
And this is just for attributes.
The “big point of the core-profile” is to define a reasonable API that doesn’t include superfluous cruft. Like
gl_Vertex. Generic vertex attributes are the more flexible mechanism, so they are the only mechanism. Shaders are the more flexible mechanism for doing various processing, so they are the only mechanism. And so forth.
The only fixed-function stuff left are things that need to be due to being connected to fixed processing stages.
gl_Position as an output from the vertex processing is there because there needs to be some way for a shader to say, “This is the clip-space position of the vertex, go use that in the primitive.”
gl_FragDepth is there as a fragment shader output because it has very specific meaning for the depth test. And so forth.
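A minimal fragment shader shows the pattern: the color output is a user-defined variable, while gl_FragDepth remains a builtin because the depth test is still a fixed stage (the halving below is just an arbitrary example, not anything the post prescribes):

```glsl
#version 330 core
out vec4 fragColor;  // user-defined output; no gl_FragColor in core

void main()
{
    fragColor = vec4(1.0);
    // Writing gl_FragDepth replaces the fixed-function depth value
    // that the (still fixed-function) depth test would otherwise use.
    gl_FragDepth = gl_FragCoord.z * 0.5;
}
```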
Core OpenGL is a cleaner, more focused API that doesn’t have as much redundancy in it. Though thanks to that split-format syntax, we’re getting new redundancy…
[quote]This is why I would call
[quote]It was never the intent to be able to do so with no changes, only relatively few.[/quote]
that a poor design decision.[/quote]
Well, to me, the compatibility profile itself is a poor design and should never have been brought back in GL 3.2. So it’s a matter of personal taste.
But as I pointed out earlier, what you wanted to write would never have compiled on most hardware of the day when this system was devised. It simply would not have been possible for hardware to run such things. And nowadays, most people don’t need the GLSL-to-fixed-function interop code. So even now when you could write it, you wouldn’t.
The ARB is not going to create a new feature solely for compatibility OpenGL. While they do keep it up-to-date to some extent, many features (like split-format syntax) don’t get back-ported to the old stuff (one of the only useful side-effects of deprecation and removal is that they don’t add functions solely for compatibility GL anymore). And unless a new feature brings something to core OpenGL, it’s highly unlikely they’ll bother with it.