We are discussing the future… Do you have facts from the future?
No, but I’m not the one making factual claims either. I’m providing evidence that the ISA approach can work as well as, if not better than, the glslang approach.
Things have changed a lot since the 8086 and, as you know, most code from those days won’t run properly on today’s systems, and vice versa.
They won’t run on today’s OSes, or maybe motherboards, or other hardware. However, the fundamental machine language itself can be executed on a P4 just as well as on a 286 (assuming that 32-bit extensions or other instruction-set extensions aren’t in use).
If you hand x87 assembly nicely scheduled for a 486DX to a P4, you’ll lose.
Define “lose”. To me, a loss would be, “It runs slower than it did before.” A win would be, “It runs faster.”
Now, in the case of the Intel x86 architecture, this may be correct, because the processor isn’t allowed to do things like re-order large sequences of opcodes. It can do some out-of-order processing, but not to the level a compiler can.
In the case of this proposal for GPUs, driver writers get the entire program to compile. Where the P4 can’t produce an optimal instruction stream simply because it can’t help but work with what it’s got, the driver can compile the whole program and do whatever re-ordering is required.
And, even so, let’s say that hardware 2 years from now, running assembly compiled from a high-level language written today, doesn’t perform as fast as it would if the high-level language were compiled directly. So? As long as it is faster than it was before (and it should still be, on the brute force of the new hardware alone), then everything should be fine.
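To make the re-ordering point concrete, here’s a rough sketch of what the driver’s compiler would actually receive. I’m using ARB_fragment_program-style assembly as a stand-in for the intermediate language; the real language is exactly what’s being debated, so treat this as an illustration only:

```c
/* Sketch only: ARB_fragment_program-style assembly standing in for the
 * proposed intermediate language.  The driver's back end gets this whole
 * listing at once, so it can reschedule it for whatever pipeline it has. */
static const char *intermediate_src =
    "!!ARBfp1.0\n"
    "TEMP t0, t1;\n"
    /* These two fetches are independent; the driver's compiler can issue
     * them in either order, or in parallel, regardless of how they were
     * originally scheduled. */
    "TEX t0, fragment.texcoord[0], texture[0], 2D;\n"
    "TEX t1, fragment.texcoord[1], texture[1], 2D;\n"
    /* Only this instruction depends on both results. */
    "MUL result.color, t0, t1;\n"
    "END\n";
```

Unlike a P4, which only ever sees a small window of instructions at run-time, the back end sees all of this at compile-time and can re-order to its heart’s content.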
The conversion to any intermediate “this isn’t the real thing anyway”-language is completely devoid of any merit.
Unless you don’t want to be a slave to glslang, that is. If you, say, want to have options as to which high-level language to use, OpenGL is clearly not the place to be. No, for that, you should use Direct3D.
Maybe, for whatever reason, I like Cg more. Maybe, for whatever reason, I don’t like any of the high-level languages and I want to write my own compiler. Or, maybe my 2-line shader doesn’t need a high-level language, and I want to just write it in assembler.
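For the record, here’s roughly what a two-instruction shader looks like through the existing ARB_fragment_program interface, with no high-level language in sight. This is a sketch: fetching the ARB entry points is platform-specific, so that boilerplate is omitted:

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB */

/* The whole shader: fetch a texel, modulate it by the interpolated color. */
static const char *fp_src =
    "!!ARBfp1.0\n"
    "TEMP t;\n"
    "TEX t, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, t, fragment.color;\n"
    "END\n";

void load_tiny_shader(void)
{
    GLuint prog;

    /* Assumes glGenProgramsARB and friends were already fetched via
     * wglGetProcAddress / glXGetProcAddressARB. */
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);
}
```

Nothing about that path forces a particular high-level language on anyone; Cg, glslang, or a hand-written string all end up at the same glProgramStringARB call.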
The fact that you are happy with glslang does not preclude anyone else from not liking it, or wanting an alternative.
Java may benefit from this approach because the size of distributed code is a concern. Java also pays a very real performance penalty for it.
That’s not entirely true. JIT compilers these days can get Java code (anything that’s not windowed) pretty close to optimized C: 80-95% or so. And that’s for large programs, far more complicated than any shader will ever be.
And Java doesn’t use bytecode to shrink the size of the distribution. It uses bytecode because:
- They believe, as many do, that the idea of having people compile a program they downloaded is asinine and a waste of time.
- They want to be able to hide their source code.
- They needed a cross-platform, post-compiled form of code, and bytecode is just an assembly language that the Java interpreter understands; some form of bytecode is the natural solution.
Why do we need to define and expose any sort of middle interface and layer an external compiler on top of that? Where are the benefits vs a monolithic compiler straight from high level to the metal?
You must mean, of course, besides the reasons I have given twice and ‘al_bob’ gave once?
And let’s not forget the notion that writing an optimizing C compiler is a non-trivial task. Neither is writing an optimizing assembler of the kind we are referring to, but it is easier than a full-fledged C compiler. Easier means easier to debug, less buggy implementations, etc. And, because there will then only need to be one glslang compiler, all implementations can share that code.
Also, one more thing. nVidia is widely known as the company that set the standard on OpenGL implementations. They were the ones who first really started using extensions to make GL more powerful (VAR, RC, vertex programs, etc.). Granted, Id Software didn’t really give them a choice, but nobody made nVidia expose those powerful extensions. I doubt there are any games that even use VAR, and even register combiners aren’t in frequent use, though 2 generations of hardware support them. Yet nVidia still goes on to advance the cause of OpenGL.
nVidia has made no bones about not being happy with the current state of glslang. Now, they can’t really go against OpenGL overtly (by dropping support), because too many games out there use it (Quake-engine based games, mostly). But they don’t have to be as nice about exposing functionality anymore. Or about having a relatively bug-free implementation. As long as those bugs don’t show up in actual games (that is, in the features real game developers use), it doesn’t hurt nVidia.
Also, they can choose not to provide support for glslang at all, even if it goes into the core. They can’t call it a GL 1.6 implementation, but they can lie and call it nearly 1.6. Even Id Software can’t afford to ignore all nVidia hardware; they’d be forced to code to nVidia-specific paths. And by doing so, they would be legitimizing those paths, thus guaranteeing their acceptance.
Rather than risk this kind of split in the core (where a good portion of the market share supports the core functionality and a good portion doesn’t, which isn’t good for OpenGL), the optimal solution would have been the compromise we’re suggesting here. There would be a glslang, but it wouldn’t live in drivers. It would compile to an open extension defining an assembly-esque language, which would in turn be compiled into native instructions.
That way, you can have a glslang that the ARB can control, but you don’t force all OpenGL users to use it.
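Just to spell that compromise out from the application’s point of view (and this is purely my sketch: glslangCompileToAsm is a made-up name, not anything the ARB has specified):

```c
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* Hypothetical, ARB-owned front end living in a utility library (think GLU),
 * NOT in the driver.  The name and signature are invented for this sketch. */
extern const char *glslangCompileToAsm(const char *glslang_source);

void use_glslang_without_a_driver_compiler(const char *glslang_source)
{
    /* Step 1: one shared glslang compiler; every implementation uses the
     * exact same code, so there is only one set of front-end bugs. */
    const char *asm_src = glslangCompileToAsm(glslang_source);

    /* Step 2: the driver only ever sees the open, assembly-esque extension
     * it already has to optimize.  Cg, a home-grown compiler, or hand-written
     * assembly all arrive through this same door. */
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(asm_src), asm_src);
}
```

The ARB still controls the language; the drivers still only have to be good at one thing: turning the assembly-esque program into native instructions.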
Granted, the reason the ARB didn’t go that way was not some notion of, “putting glslang into drivers is the ‘right thing’.” No, it’s there because it hurts Cg, and therefore nVidia. ATi and 3DLabs have a stake in hurting things that are in nVidia’s interests. Killing the ability for Cg to be used on OpenGL in a cross-platform fashion is just the kind of thing they would like to do to nVidia. And, certainly, using the glslang syntax over the Cg one (even though neither offers additional features over the other) was yet another thing ATi and 3DLabs wanted to do to hurt Cg; it makes it more difficult for Cg to be “compiled” into glslang.