Tom's new GLSL demo

I thought my comment on nVidia’s implementation being “deficient” would raise an eyebrow or two, but Mark Kilgard himself… wow :wink:

Wouldn’t it seem the implementation upon which the shader does not work is the deficient one?
No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.

The typecast operator alone should have immediately triggered a syntax error. But, like a good Cg compiler, it just took it.

It’s like having an implementation of ARB_fragment_program that, when shadow textures are bound, does the depth compare operation in clear defiance of the spec… oh wait, nVidia’s GL implementation does that too… :rolleyes: :wink:

The point is that it is perfectly acceptable to call an implementation of an extension that does not follow the spec deficient.

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.
Which is both true and a perfectly legitimate thing to bring up when the language was being defined. And I’m pretty sure you guys did. However, you lost.

The correct decision at that point is to accept the loss and do what the spec says. It is not acceptable to violate parts of the spec just because you just don’t agree with them, even if the disagreement is perfectly reasonable and rational. This confuses shader writers who need cross-platform portability. Suddenly, what seemed like a perfectly valid shader on one card fails to even compile on another.

Look, I agree that there’s a lot of nonsense in glslang. I can’t say I’m happy with the language; there’s lots of stuff in there that looks like it was added solely to be different and weird. I probably would have preferred that Cg became the OpenGL shading language, or something similar to it. But we have to adhere to specs, even those we disagree with. If we don’t, we create chaos and further weaken OpenGL.

inability to override standard library functions
Wait. It has that. I forget what you have to do, but I definitely remember reading about precisely how to do it in my OpenGL Shading Language book.
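If memory serves, it’s just a matter of declaring and defining a function with the same name, and the compiler resolves calls to your version. Something like this, I think (a sketch from memory rather than from the book, so treat the details as my assumption):

// Fragment shader sketch: redefining a built-in with the same signature.
// Name and body are illustrative; the exact rules are in the spec/book.
vec3 reflect(vec3 I, vec3 N)
{
    // this definition shadows the built-in reflect()
    return I - 2.0 * dot(N, I) * N;
}

void main()
{
    vec3 r = reflect(normalize(vec3(gl_TexCoord[0])), vec3(0.0, 0.0, 1.0));
    gl_FragColor = vec4(r, 1.0);
}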

NVIDIA’s GLSL implementation has a lot of Cg heritage so that constructs that make sense in C and C++ typically “just work as you’d expect” in GLSL.
But it doesn’t have to. 3DLabs was “nice” (read: desperate for attention) enough to provide a full parser for glslang that would catch the vast majority of errors that nVidia’s compiler lets through. The idea for releasing this was so that there would be some conformity in compilers. Apparently, you just decided to shoehorn glslang into nVidia-glslang.

If you’re having a meeting with your lead programmer, and he comes to a decision you don’t agree with, then you argue with him. Either you convince him that he’s wrong or you don’t. However, when the meeting is over and a decision is made, you either follow through or quit. Back-dooring the language like this is just unprofessional.

I was getting pretty stoked for an NV40-based card. But this complete and total lack of willingness to adhere to a spec, more than anything (even the news of ATi upping the number of pipes in their new chips), is sufficient reason to keep a Radeon in my computer. At least I can be sure that any shaders I write will work anywhere…

The correct response to, “Your compiler is in violation of the spec” is not, “We don’t agree with the spec because it’s silly.” The correct response is, “We recognise this to be an error, and we will fix the problem at our earliest convenience.” I would have accepted, “Our glslang compiler was built by shoehorning our Cg compiler to accept the language. Doing this, however, did leave language constructs that Cg provides open to the glslang input path. We intend to correct this as our glslang implementation matures.”

Our extended features are there just for the convenience of developers.
Extending the language is one thing. Perfectly reasonable with valid extension strings/specs. Changing its syntax, accepting syntax that isn’t valid, is quite another.

Originally posted by Korval:
No. Since the shader violates the spec in several places, properly compiling it without error is the wrong behavior.

:confused: The shader doesn’t violate the spec. The shader is compiled according to the NV_Cg_shader spec, which is documented.
So it’s definitely NOT wrong behaviour.

Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn’t need to get any warnings. The solution would be, for example, to add a pragma or a define to the shader that disables all the Cg features.

Originally posted by simongreen:

It is NVIDIA’s intention to provide a 100% conformant GLSL implementation. Our extended features are there just for the convenience of developers.

What about noise? Will it be fast?


Personally, I think GLSL as a language is fine - once you’ve used one C-style shading language you’ve used them all! There are a few weird idiosyncrasies in the spec, but we’re working with the ARB to resolve these. I don’t think a device driver is really the right place for a high level language compiler, but that’s another story.

OK, C-style isn’t IMHO the best design, but it’s widely known. What, in your opinion, is the problem with having the compiler in the driver? If I get enough standardized feedback I see no problem. But I’m not a guru. :slight_smile:

Originally posted by Zengar:
:confused: The shader doesn’t violate the spec. The shader is compiled according to the NV_Cg_shader spec, which is documented.
So it’s definitely NOT wrong behaviour.

But we are talking about a shader which was described as “using the OpenGL Shading Language” (taken directly from the news post), and as such it should be expected that code written for the OGSL would compile on any OGL release which supports the OGSL spec. However, the examples written by Tom DON’T conform to the spec (not his fault, it’s just how the drivers are written), thus they are not valid OGSL shaders, thus the compiler is broken/wrong. If Tom had been putting out a Cg shader, then fine, wonderful, no problem, carry on regardless. But he believed he was making an OGSL shader, not some weird hybrid. The compiler never should have let his code pass, and we wouldn’t be having this discussion now.


Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn’t need to get any warnings. The solution would be, for example, to add a pragma or a define to the shader that disables all the Cg features.

On the flip side, you write your code, believing it to be conforming, you release it to the world at large, and the world at large goes “lovely, now why doesn’t this code work on my Radeon/Deltachrome/FireGL/Wildcat?”. At which point all the critics point and say “look, this is why D3D is better, it doesn’t have this problem with its base shader language, opengl is Just Bad™”.
Conformance should be on by default, that way everyone has a common base to aim for. Turning it off via a define is, again, fair enough; you are saying to the compiler “I know better, do it this way”, but that shouldn’t be the default operation.
By all means relax the restrictions later if needed (much like how the Render Target extension is being made), or even have the gfx card maker produce a {NV|ATI}_Custom_Shader extension if you REALLY feel the need, so that the program can use a shader designed for that card (although this does run contrary to the nature of the OGSL).
Default conformance off = shaders produced which only work on one series of cards = versioning problems = the GL_CLAMP issue all over again.

(If you hadn’t guessed, I’m very much “pro standards”, regardless of who is breaking what.)

Originally posted by Zengar:
Enabling shader conformance in the driver by default is a very bad idea. A user with an nvidia driver doesn’t need to get any warnings. The solution would be, for example, to add a pragma or a define to the shader that disables all the Cg features.
Agreed, forcing the user to activate/deactivate language conformance is bad, but that’s not necessary.

A good solution would be having a “#pragma GL_NV_insert name here” in the shader or a glEnable(GL_NV_insert name here) or something like this when you want to use the nVidia language extensions (a rough sketch of the idea follows below). This way, everyone who wants to use the language extensions can, without bothering the user to do something, but it is not possible to “accidentally” use them like Tom did.

In my opinion, extensions should be turned on when you want them, not turned off when you don’t want them. If I want to use something, I have no problem with having to turn it on, but if I don’t want to use it, I don’t even bother if it’s there, let alone turning it off.
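Just to illustrate, the opt-in could be as simple as one line at the top of the shader. The pragma name below is completely made up (there is no such NVIDIA pragma that I know of); it only shows what the mechanism might look like:

// Hypothetical opt-in -- NOT a real NVIDIA pragma, purely an illustration.
#pragma NV_cg_compatibility(enable)

// Only after such an opt-in would Cg-style constructs like this be accepted:
varying vec4 Cs = gl_TexCoord[2];

void main()
{
    gl_FragColor = Cs;
}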

Originally posted by Cab:
Originally posted by evanGLizr:
I don’t think cab’s solution of using Cs, Cd… directly as varying instead of gl_TexCoord[n] works without changing the application
No. All the gl_TexCoord[…] are just varying parameters and are not used as input parameters for the vertex.glsl.
[…]
So you don’t need to change the app and the solution works properly :slight_smile:
Doh, my bad, you are absolutely right.

I don’t want to sound like a fanboy :wink: , and I agree with all of your arguments.
But: if Tom can’t read the spec, that’s not Nvidia’s problem, is it? The driver documentation describes all the differences quite clearly.
(Didn’t mean to offend you Tom, you are indeed one of the few people on the net that I truly respect and like.)
My opinion is that you are making a mountain out of a molehill. Although I agree that an enabling command would be better than my proposal. (I’m the one who doesn’t like any standards, if you haven’t noticed :smiley: )

Originally posted by evanGLizr:
Did you remember to change vertex.glsl?
The correct code is in my first post; I don’t think cab’s solution of using Cs, Cd… directly as varying instead of gl_TexCoord[n] works without changing the application (although that would be the recommended fix if you could modify the app), because I believe the app is hardcoded to use fixed slots for the program attributes (gl_TexCoord[0…5]) instead of querying for Cs, Cd, etc.
Nice catch. vertex.glsl was the only problem.

I tried Cab’s solution also and that works as well for me and should on all systems.


varying vec4 Ca = gl_TexCoord[0];
varying vec4 Cd = gl_TexCoord[1];
varying vec4 Cs = gl_TexCoord[2];

varying vec4 V_eye = gl_TexCoord[3];
varying vec4 L_eye = gl_TexCoord[4];
varying vec4 N_eye = gl_TexCoord[5];

this is supposed to alias, but it only works on NV?
It’s best to have a reserved keyword like in ARBvp+fp

ALIAS thing = the_other_thing;

because otherwise it looks like it’s using some other attribute.
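For comparison, the spec way to get at the same data is to drop the initializers and just read the built-in varyings inside main(). A minimal sketch of the pattern (the actual shading math is omitted, so the last line is only a placeholder):

// Conformant fragment-shader sketch: no initializers on varyings.
void main()
{
    vec4 Ca = gl_TexCoord[0];
    vec4 Cd = gl_TexCoord[1];
    vec4 Cs = gl_TexCoord[2];

    vec4 V_eye = gl_TexCoord[3];
    vec4 L_eye = gl_TexCoord[4];
    vec4 N_eye = gl_TexCoord[5];

    // ... shading code would go here, using the locals above ...
    gl_FragColor = Ca;   // placeholder so the sketch stands on its own
}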

I’ve uploaded a new zip with Cab’s bugfixed shaders in it. I hope it works for everybody now. Thanks a bunch, Carlos!

I also looked into the “strict shader portability warnings” option in NVemulate. My original shaders compile with or without it. Simon said “future versions of our driver will have an option to enforce GLSL conformance”, so I guess the current checkbox in NVemulate does something else. No doubt it’s described elaborately in the documentation I didn’t read :wink:

– Tom

The strict option in nvemulate turns on warnings that will show up in your info log.

I must agree with those who think that you should relax the tokens by putting a #pragma something in the file, instead of relaxing them by default.

Mark and Simon, you claim it’s to help developers? I say it’s not helping at all. As a developer you more often than not want your program to work on any glsl implementation, and that means sticking to the agreed specification whether you like it or not, so relaxing by default only creates frustrated programmers. And even if the compiler itself can be said to be conformant as long as it compiles glslang shaders, the same cannot be said about shaders using things that aren’t in the spec. So when/if nvidia puts out examples of glslang code, the shaders had better conform to the spec, or they’re not glslang examples anymore. I really hope you help developers by NOT allowing compilation of non-glslang shaders by default, and let them unlock the relaxed tokens with a #pragma.

Originally posted by Tom Nuydens:
I’ve uploaded a new zip with Cab’s bugfixed shaders in it. I hope it works for everybody now.
Just downloaded and tried it! Works on Radeon 9800 Pro with Catalyst 4.4! Nice work, Tom! :slight_smile:

I like all the additional stuff in the NV GLSL implementation.
Of course, the compiler should warn us about incompatible code, but hey, anyone recall DEBUG???
And, as all this stuff is written in drivers, it’s easier for one company to include innovative approaches & let others update, by patching docs & without adding a GL_GLSL_101, GL_GLSL_102, GL_GLSL_102b, GL_GLSL_103, GL_GLSL_105a, … GL_GLSL_199. As long as old shaders compile it’s OK, but newer ones will allow you to do more.

Originally posted by M/\dm/:
I like all the additional stuff in the NV GLSL implementation.
But this is NOT GLSL.

Originally posted by M/\dm/:
Of course, the compiler should warn us about incompatible code, but hey, anyone recall DEBUG???
And, as all this stuff is written in drivers, it’s easier for one company to include innovative approaches & let others update, by patching docs & without adding a GL_GLSL_101, GL_GLSL_102, GL_GLSL_102b, GL_GLSL_103, GL_GLSL_105a, … GL_GLSL_199. As long as old shaders compile it’s OK, but newer ones will allow you to do more.

When this “additional stuff” becomes GLSL then it will be a good thing to expose by default as the appropriate GLSL version. Now it is not.

Originally posted by Korval:
I thought my comment on nVidia’s implementation being “deficient” would raise an eyebrow or two, but Mark Kilgard himself… wow :wink:
Quite an achievement! :smiley:
But you’re still not at my level. :wink: I once managed to get an email from him because of a topic on opengl.org. Now THAT’S an achievement!! :smiley:

Originally posted by Mark Kilgard:

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages.

Much thanks to you nVIDIA folks for putting effort into making Cg (and GLSL) easier for the developer – language constructs, the SDK, and support for Linux. I hope to see something like the Cg Tutorial for GLSL.

Again, much thanks for your contributions!!! They are appreciated by many.

I still have no idea how breaking the language apart into different incompatible dialects is going to help developers…

Wouldn’t it seem the implementation upon which the shader does not work is the deficient one?
Usually with languages, the compiler that catches the most errors is the better one.

Strict GLSL has a lot of deficiencies that will frustrate anyone used to C-style languages. An incomplete standard library
A hardware vendor is allowed to extend the standard library. We should have no problems here.

lack of reasonable type promotion, lack of casting constructs,
Casts are done with constructors.

E.g. vec4(v), not (vec4)v.

Note that C++ language design recognized problems with C-style typecasts, and did its best to deprecate them.
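For instance, a small sketch of the constructor style:

// GLSL conversions are spelled as constructors, not C-style casts.
void main()
{
    int   i  = 3;
    float f  = float(i);        // constructor-style conversion
    vec3  v3 = vec3(0.0, 1.0, 0.0);
    vec4  v4 = vec4(v3, 1.0);   // build a vec4 from a vec3 plus a scalar
    // float g = (float)i;      // C-style cast: a syntax error under the spec
    gl_FragColor = v4 * f;
}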

inability to modify varying and uniform data,
Sounds like good SIMD design to me.
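If you need a writable value, you just copy it into a local first. A minimal sketch (the uniform and varying names are made up for the example):

// Uniforms and varyings are read-only in a fragment shader,
// so take local copies before modifying anything.
uniform vec4 baseColor;   // hypothetical uniform
varying vec4 tint;        // hypothetical varying

void main()
{
    vec4 c = baseColor;
    vec4 t = tint;
    c.rgb *= 0.5;         // fine: c and t are plain locals
    t.a    = 1.0;
    gl_FragColor = c * t;
}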

inability to override standard library functions,
Not true, the spec allows overriding standard library functions.

failure to support passing structs to functions,
Not true, the spec allows passing structs to functions.
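A quick sketch of both declaring a struct and handing it to a function (the names are just for illustration):

// Structs can be constructed, passed to functions and returned from them.
struct Material {
    vec4  diffuse;
    float shininess;
};

vec4 shade(Material m, float NdotL)
{
    return m.diffuse * NdotL;   // shininess unused in this tiny example
}

void main()
{
    Material m = Material(vec4(1.0, 0.5, 0.25, 1.0), 16.0);
    gl_FragColor = shade(m, 0.75);
}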

A shading language should not be a straightjacket that forces you to write shaders in cumbersome ways just to satisfy a language specification that left out many of the practices that make C and C++ such rich, useful languages.
In my study of languages, much of what you call rich and useful is considered error prone, and some of that is supported only for backward compatibility to times before people knew better.

However, I would support some auto-promotions, once carefully architected into the language.

There’s stuff about GLSL that just makes reasonable programmers wonder “what were they thinking??” such as the decision to have row-major arrays (C-style) yet column-major matrices (FORTRAN-style).
Not true. The language doesn’t have row-major arrays, it just has one-dimensional arrays. This should be extended in the next version, and support for row-major matrices should come along with it. Arrays should become first class objects at the same time. (In my opinion.)
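To make the column-major convention concrete (a small sketch):

// Matrix constructors and indexing are column-major:
// the first values fill the first *column*, and m[i] selects column i.
void main()
{
    mat2 m = mat2(1.0, 2.0,     // column 0 = (1.0, 2.0)
                  3.0, 4.0);    // column 1 = (3.0, 4.0)
    vec2  col0 = m[0];          // (1.0, 2.0)
    float m01  = m[0][1];       // row 1 of column 0 -> 2.0
    gl_FragColor = vec4(col0, m01, 1.0);
}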

I am quite aware of perhaps more deficiencies in the language than Mark is. The reason they exist is that we wanted something fully thought out and consistent to ship soon. Adding in extra features before they can be added carefully to the language leads to chaos. The language we are starting with is simple and solid, and that’s the right first step.

JohnK


Let me tell you, from my experience reading other people’s C/C++ code, you do want the language to do things one way and not multiple ways. C++ is king at doing one thing a zillion ways, so what must you do when you come across code written in some way unfamiliar to you? That’s right, open up the C++ standard and learn that new syntax rule. That is a major time sink. The discrepancy among coders is the problem, and allowing them to write shaders in non-standard ways only makes it worse. Isn’t STL a savior? I think so.

Originally posted by JD:
Let me tell you, from my experience reading other people’s C/C++ code, you do want the language to do things one way and not multiple ways. C++ is king at doing one thing a zillion ways, so what must you do when you come across code written in some way unfamiliar to you? That’s right, open up the C++ standard and learn that new syntax rule. That is a major time sink. The discrepancy among coders is the problem, and allowing them to write shaders in non-standard ways only makes it worse. Isn’t STL a savior? I think so.
Well, one could argue that a glSlang shader is a lot shorter than a C program, so reading it wouldn’t be such a big problem (and glSlang is still a lot simpler, too), but in general I have to agree with you. I don’t want/need a compiler which can parse everything; I only want a compiler which works and supports hardware features. I don’t bother much about compiler features.

And then there are people who moan that they are driver writers, not compiler writers, and ask why they need to implement a high-level-language compiler… so why complicate it even further?

Jan.