Please improve the glSlang spec!

Originally posted by sqrt[-1]:
[b] [quote]Originally posted by Korval:
[quote]What issues with GLSL do you think are the reason for it not being good enough? Would you rate CG or HLSL Good?
We had a couple of threads about this. It came up when we found that nVidia’s glslang compiler was playing fast-and-loose with the spec, allowing for things like C-style casts and autopromotion of integer values.
[…]
[/QUOTE]Call me crazy but those two reasons are things I actually like about GLSL above Cg.

Besides, I think these are just “personal taste” issues and putting an extra .0 after float constants and a float(…) around casts is not any reason to think that a language is not “good”.

(Also consider that since GLSL is compiled by the driver, we want as few ambiguities as possible in the code. i.e. if I have a sin(float x) and a sin(int x), I would prefer not to let the driver choose which one is called.)[/b][/QUOTE]The point is not so much that you must define floating-point constants as, say, 1.0; what is ridiculous is that the spec goes to great lengths to define, for example, vector-by-scalar multiplications, but forgets, or decides not, to define integer-by-floating-point multiplications (or other cross-type operations).

One of the ways of solving that is by type promotion, the other is by combinatorial explosion of the operations.

C++ has clearly defined rules for promotions, overload resolution, and so on. There’s no compiler ambiguity involved.

Call me crazy but those two reasons are things I actually like about GLSL above Cg.
You’re crazy.

:cool:

Besides, I think these are just “personal taste” issues and putting an extra .0 after float constants and a float(…) around casts is not any reason to think that a language is not “good”.
Why not? This is a language that has to be used, usually by programmers. If they don’t like it, if it has annoying “gotchas” in it for no real benefit, then that’s plenty reason enough to call the language not good.

if I have a sin(float x) and a sin(int x) I would prefer to not let the driver choose which one is called
If you have those two in C++, there’s no ambiguity as to which may be called. The ANSI C++ spec defines which one gets called in all circumstances. Now, the user may not know the spec well enough to know the answer, but that’s not the spec’s fault.

The following matrices have been added to an interim version of the shading language spec. Hopefully, you’ll see them in reality sometime soon.
Thanks :slight_smile:

One of the ways of solving that is by type promotion, the other is by combinatorial explosion of the operations.

C++ has clearly defined rules for promotions, overload resolution, etc. There’s no compiler ambiguity involved.[/QB]
I don’t really see the big deal with manual type promotion. Even in C++, with most compilers, you will get warnings when combining floats and integers in the same math statement (without manually promoting them to the same type).

(i.e. with

float a = 5.0f;
int c = a;

you have to use:

float a = 5.0f;
int c = int(a);

to get rid of warnings, which most programmers do)

(another example would be passing a float to a function that takes an integer)

Also (just a guess), but perhaps conversions between float and int, etc., are not as “free” as they are in C++, and the GLSL people wanted to ensure they are only done at the user’s request?

However, this is not something I really want to quibble about so I’ll agree to disagree :smiley: .

Originally posted by Korval:
[b] [quote]What issues with GLSL do you think are the reason for it not being good enough? Would you rate CG or HLSL Good?
We had a couple of threads about this. It came up when we found that nVidia’s glslang compiler was playing fast-and-loose with the spec, allowing for things like C-style casts and autopromotion of integer values.

Both of these are expressly disallowed by the spec, and both of them should be allowed.

Because of this, Cg (HLSL is basically the same language) is a nicer language to use than glslang.[/b][/QUOTE]In our company we have a C++ style document that has been evolving over the last ten years.
There is one rule that was added six or seven years ago. Briefly:

  • You must use the f suffix on every floating-point value, and every floating-point value should be written in float format: 1.0f, 0.0f, …

There is another rule (added 4 or 5 years ago) that mainly says:

  • You must use the constructor syntax for type conversions (int(fVal), float(iVal), …), or the C++ reinterpret_cast operator: ptrtype2 = reinterpret_cast<type2*>(ptrtype1)

This document is used by every programmer who works or has worked at our company, and no one has complained about it.
In fact, the document was created from the suggestions and revisions of senior programmers.

float a=5.0f;
int c = a;
The issue was: should the compiler automatically interpret (not cast!) an int as a float?

Example : float thing = 0;

Since many coders have the habit of doing this, the answer is yes.
The question developed further: should a compiler allow this (see the front page for votes)?

Who needs casting and functions like sin(int x)?

There is one rule that was added six or seven years ago. Briefly:

  • You must use the f suffix on every floating-point value, and every floating-point value should be written in float format: 1.0f, 0.0f, …

There is another rule (added 4 or 5 years ago) that mainly says:

  • You must use the constructor syntax for type conversions (int(fVal), float(iVal), …), or the C++ reinterpret_cast operator: ptrtype2 = reinterpret_cast<type2*>(ptrtype1)

In fact, I’m doing this, except for using the f suffix, without ever having read a design document. :wink:

Btw, I’m no professional developer (yet); I still have a year of study left.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.