I thought my comment on nVidia’s implementation being “deficient” would raise an eyebrow or two, but Mark Kilgard himself… wow
Wouldn’t it seem the implementation upon which the shader does not work is the deficient one?
No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.
The C-style typecast operator alone should have immediately triggered a syntax error. But, like a good Cg compiler, it just accepted it.
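To make the point concrete (this is my own minimal illustration, not the shader in question): strict GLSL 1.10 has no C-style casts and no implicit int-to-float conversion, so a conformant compiler has to reject both and require constructor syntax instead:

```glsl
// Rejected by a strict GLSL 1.10 compiler, but a Cg-style
// front end will happily take them:
float a = (float)1;   // C-style cast -- not in the GLSL grammar
float b = 2;          // implicit int -> float conversion -- disallowed

// Spec-conformant equivalents:
float c = float(1);   // constructor syntax
float d = 2.0;        // explicit float literal
```

Write the first form, and your shader compiles on one vendor's driver and dies with a syntax error on another's.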
It’s like having an implementation of ARB_fragment_program that, when shadow textures are bound, does the depth compare operation in clear defiance of the spec… oh wait, nVidia’s GL implementation does that too… :rolleyes:
The point is that it is perfectly acceptable to call an implementation of an extension that does not follow the spec deficient.
Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.
Which is both true and a perfectly legitimate thing to bring up when the language was being defined. And I’m pretty sure you guys did. However, you lost.
The correct decision at that point is to accept the loss and do what the spec says. It is not acceptable to violate parts of the spec just because you don’t agree with them, even if the disagreement is perfectly reasonable and rational. This confuses shader writers who need cross-platform portability. Suddenly, what seemed like a perfectly valid shader on one card fails to even compile on another.
Look, I agree that there’s a lot of nonsense in glslang. I can’t say I’m happy with the language; there’s lots of stuff in there that looks like it was added solely to be different and weird. I probably would have preferred that Cg became the OpenGL shading language, or something similar to it. But we have to adhere to specs, even those we disagree with. If we don’t, we create chaos and further weaken OpenGL.
inability to override standard library functions
Wait. It has that. I forget exactly what you have to do, but I definitely remember reading about precisely how to do it in my OpenGL Shading Language book.
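As I recall from the spec and the Orange Book, a shader can simply redeclare a built-in function, and once the user declaration is visible, calls resolve to the user-defined version instead of the built-in. Something along these lines (my own sketch):

```glsl
// GLSL 1.10: redeclaring a built-in hides it within the shader;
// calls to dot(vec3, vec3) below now resolve to this definition
// rather than to the standard library version.
float dot(vec3 a, vec3 b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}
```

So “inability to override standard library functions” just isn’t a real deficiency of the language.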
NVIDIA’s GLSL implementation has a lot of Cg heritage so that constructs that make sense in C and C++ typically “just work as you’d expect” in GLSL.
But it doesn’t have to. 3DLabs was “nice” (read: desperate for attention) enough to provide a full parser for glslang that catches the vast majority of errors that nVidia’s compiler lets through. The idea behind releasing it was to get some conformity across compilers. Apparently, you just decided to shoehorn glslang into nVidia-glslang instead.
If you’re having a meeting with your lead programmer, and he comes to a decision you don’t agree with, then you argue with him. Either you convince him that he’s wrong or you don’t. However, when the meeting is over and a decision is made, you either follow through or quit. Back-dooring the language like this is just unprofessional.
I was getting pretty stoked for an NV40-based card. But this complete and total lack of willingness to adhere to a spec, more than anything (even the news of ATi upping the number of pipes in their new chips), is sufficient reason to keep a Radeon in my computer. At least then I can be sure that any shaders I write will work anywhere…
The correct response to, “Your compiler is in violation of the spec” is not, “We don’t agree with the spec because it’s silly.” The correct response is, “We recognise this to be an error, and we will fix the problem at our earliest convenience.” I would even have accepted, “Our glslang compiler was built by shoehorning our Cg compiler into accepting the language. Doing this, however, left language constructs that Cg provides open to the glslang input path. We intend to correct this as our glslang implementation matures.”
Our extended features are there just for the convenience of developers.
Extending the language is one thing; that’s perfectly reasonable with valid extension strings/specs. Changing its syntax, accepting constructs the spec rejects, is quite another.