The high level compiler was separate from the driver. You have profiles. You get information about the number of low level instructions your shader will be using. The low level shaders had glGet to know what the hardware capabilities are. No built-in glLight and glMaterial and ModelProjection and crap.
GLSL went straight into the driver and was a black box. It went through a lot of revisions and ended up where Cg is.
It most certainly did not end up where Cg is.
There are no “profiles” in GLSL. The closest thing you have to that is the #version directive, but that only declares which version of the language you’re writing against.
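To make that concrete, here’s a minimal sketch (names like fragment_src and compile_example are mine; it assumes a current GL 3.x context and a loader such as GLEW): the directive is nothing more than the first line of the source string you hand to glShaderSource.

    #include <GL/glew.h>

    /* Illustrative only: #version selects the language version and nothing else.
       There is no profile, no instruction budget, no target hardware implied. */
    static const char *fragment_src =
        "#version 330 core\n"
        "out vec4 color;\n"
        "void main() { color = vec4(1.0, 0.0, 0.0, 1.0); }\n";

    void compile_example(GLuint shader)   /* shader from glCreateShader(GL_FRAGMENT_SHADER) */
    {
        glShaderSource(shader, 1, &fragment_src, NULL);
        glCompileShader(shader);
    }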
GLSL always had glGets for implementation details. It still doesn’t have glGets for running out of instructions, because that’s not something you could ever correct for.
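As a rough sketch of the kind of implementation details you can (and always could) query — assuming a current context and GLEW; the function name is mine — note that none of these tells you “how many instructions you have left”:

    #include <stdio.h>
    #include <GL/glew.h>

    /* Sketch: standard GLSL-related limits exposed through glGetIntegerv.
       Assumes glewInit() has run and a context is current. */
    void print_glsl_limits(void)
    {
        GLint vert_uniforms, frag_uniforms, tex_units, varyings;

        glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS,   &vert_uniforms);
        glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &frag_uniforms);
        glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS,         &tex_units);
        glGetIntegerv(GL_MAX_VARYING_FLOATS,              &varyings);

        printf("vertex uniform components:   %d\n", vert_uniforms);
        printf("fragment uniform components: %d\n", frag_uniforms);
        printf("texture image units:         %d\n", tex_units);
        printf("varying floats:              %d\n", varyings);
    }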
And not having access to fixed-function state would have been a detriment to the adoption of GLSL. It would have made GLSL shaders all-or-nothing propositions for users. You either use GLSL everywhere, or you use it nowhere. Even the ARB assembly programs had ways of accessing fixed-function OpenGL state.
Yes, you can look back now, 6 years after the fact, and say that we don’t need it anymore. But we did need it then, and it was very important then.
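For anyone who never wrote shaders in that era, this is roughly what that access looked like — a GLSL 1.10-style vertex shader using the old built-in uniforms, written out here as a C source string (illustrative only):

    /* Illustrative only: a legacy GLSL 1.10 vertex shader using the old built-in
       uniforms that mirrored fixed-function state (gl_ModelViewProjectionMatrix,
       gl_LightSource, gl_FrontMaterial, ...).  All of this is deprecated or gone
       in the core profile today, but back then it was how you dropped a GLSL
       shader into an otherwise fixed-function renderer. */
    static const char *legacy_vertex_src =
        "#version 110\n"
        "void main() {\n"
        "    vec3 n = normalize(gl_NormalMatrix * gl_Normal);\n"
        "    vec3 l = normalize(gl_LightSource[0].position.xyz);\n"  /* directional light assumed */
        "    float d = max(dot(n, l), 0.0);\n"
        "    gl_FrontColor = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * d;\n"
        "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
        "}\n";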
Getting binary blobs is very different from compiling to an intermediate language. Binary blobs are black boxes. They are not available in any format that people are expected to be able to read. The purpose of this functionality is entirely different from compiling to an intermediate form.
An intermediate form is an interchange language. If the binary blobs were like Cg profiles, then you could take a binary blob from one implementation and compile it to another implementation. You cannot. Indeed, you can’t even be sure that your current implementation will be able to read the binary blob it gave you.
No, the ability to get program binaries has exactly and only one purpose: to potentially speed up subsequent execution of programs by saving you from having to re-compile shaders. And note that Cg doesn’t provide this, since the results of Cg still have to be compiled. Again, you don’t need to parse the Cg sources, but you still need to parse the results. You still have to do all of the optimization work, which is the lion’s share of the effort. glGetProgramBinary avoids virtually all of that compilation overhead.
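A rough sketch of that one purpose in practice (ARB_get_program_binary / GL 4.1; the file handling and function names are mine):

    #include <stdio.h>
    #include <stdlib.h>
    #include <GL/glew.h>

    /* Sketch: save a linked program's binary blob so a later run can skip the
       compile.  The format token is implementation-specific, and the driver is
       allowed to reject the blob later, so you always keep the GLSL source
       around as a fallback. */
    void save_program_binary(GLuint program, const char *path)
    {
        GLint length = 0;
        glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &length);

        void  *blob   = malloc(length);
        GLenum format = 0;
        glGetProgramBinary(program, length, NULL, &format, blob);

        FILE *f = fopen(path, "wb");
        fwrite(&format, sizeof format, 1, f);
        fwrite(blob, 1, length, f);
        fclose(f);
        free(blob);
    }

    /* Returns nonzero if the cached blob was accepted by the current driver. */
    int load_program_binary(GLuint program, GLenum format,
                            const void *blob, GLsizei length)
    {
        GLint ok = GL_FALSE;
        glProgramBinary(program, format, blob, length);
        glGetProgramiv(program, GL_LINK_STATUS, &ok);
        return ok == GL_TRUE;   /* if GL_FALSE, recompile from GLSL source */
    }

And note the mandatory fallback path: the spec explicitly allows glProgramBinary to reject a blob, even one produced on the same machine, which is exactly why this is not an interchange format.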
So no, GLSL has not ended up where Cg is. You can say GLSL “ended up where Cg is” when it compiles to an assembly language that is further compiled into an OpenGL object.
Also, this: “You get information about number of low level instructions your shader will be using,” is not even remotely true. The “low level instructions” bear absolutely no resemblance to the actual language used internally by the GPU. Once upon a time, it did (and even then, it was an approximation). But modern VLIW/SIMD-based GPUs are so fundamentally different as to be unrecognizable.
The “low level instructions” have to go through about as much compiling to fit modern hardware as GLSL. Sure, they could use a simpler text parser. But that’s about it.
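For reference, this is roughly what those “low level instructions” look like — an ARB_vertex_program string of the kind you hand to glProgramStringARB (illustrative only). Notice it also reads fixed-function state (state.matrix.mvp), and none of it maps one-to-one onto a modern scalar/SIMD GPU:

    #include <string.h>
    #include <GL/glew.h>

    /* Illustrative ARB assembly: four dot products to transform a vertex by the
       fixed-function modelview-projection matrix.  The driver still has to
       re-schedule and re-encode every one of these for the actual hardware. */
    static const char *arb_vp_src =
        "!!ARBvp1.0\n"
        "PARAM mvp[4] = { state.matrix.mvp };\n"
        "DP4 result.position.x, mvp[0], vertex.position;\n"
        "DP4 result.position.y, mvp[1], vertex.position;\n"
        "DP4 result.position.z, mvp[2], vertex.position;\n"
        "DP4 result.position.w, mvp[3], vertex.position;\n"
        "MOV result.color, vertex.color;\n"
        "END\n";

    void load_arb_program(void)   /* assumes a compatibility context with the extension */
    {
        glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                           (GLsizei)strlen(arb_vp_src), arb_vp_src);
    }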
3D Labs may have been incredibly self-serving by defining GLSL in terms of scalars rather than vectors, but they were right in the end. All modern GPUs use some form of VLIW/SIMD instead of purely vector opcodes.
That way even Intel wouldn’t [censored] up, since they would only need to support assembly, and you can’t go too wrong there.
We’ve talked about this. The majority of driver bugs from shaders come from failing to implement the functionality correctly, not from failing to implement the parser. Parsing assembly is easier than parsing GLSL, but both get converted to the same internal representation. It’s the process of turning that representation into actual GPU code where most of the problems lie.