HLSL vs Cg = The Poll

In the same way that new CPUs don’t provide real new features anymore, but “only” better optimizations…

Since when did CPUs stop providing new features? Doing sin/cos as an opcode is not something that all CPUs provide. Vector math (MMX, 3DNow!, SSE, etc.) is also a new feature. CPUs are constantly gaining new features.

No chaos of different shader functionality anymore.

That’s not going to happen in a year. There are still features that a few vendors have (per-pixel math operations like sin, log, etc.) that others don’t, thus creating a dichotomy of languages.

And is there really a problem with having two slightly different feature sets in the shader that are queryable? As long as the interface is the same, I can write different shaders easily enough. As someone mentioned, writing shaders is only a small part of writing any rendering system. It is hardly terrible that I may need to write several shaders for various hardware. As long as the interface to those shaders is the same, there isn’t much of a problem.
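(For illustration, here is a minimal C++ sketch of that kind of selection, assuming shaders are picked per renderer at startup. The extension name and shader file names are invented for this example; only glGetString(GL_EXTENSIONS) is a real GL call.)

#include <GL/gl.h>
#include <cstring>

// Pick a shader variant based on what the driver advertises.
// "GL_FAKE_fragment_math" and the file names are made up for this sketch.
const char* selectShaderPath()
{
    const char* ext =
        reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (ext && std::strstr(ext, "GL_FAKE_fragment_math"))
        return "lighting_native_sincos.shader";  // vendor exposes sin/log etc.
    return "lighting_approx_sincos.shader";      // fall back to approximations
}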

Originally posted by Jurjen Katsman:

There already was a long and deep discussion about exposing hardware limits in this discussion forum.

Folker: As that discussion seems to continue here (features, hardware limits, same thing), I feel justified commenting on it.

Originally posted by Korval:

Sin/cos/log are not new features; they are only performance optimizations, because sin/cos were also calculated by sub-routines on older CPUs. The same is possible for GPUs. Maybe the first GPU implementations will provide only very inaccurate versions of sin/cos/log, e.g. a simple quadratic approximation for sin/cos, but I don’t see a problem there. So I don’t think that sin/cos/log are a real problem.
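(To make the quadratic approximation concrete, here is the classic parabola fit in C++; the numbers are illustrative only and not taken from any actual driver.)

// Crude quadratic approximation of sin(x) on [0, pi]:
//   sin(x) ~= (4/pi)*x - (4/pi^2)*x^2
// Exact at 0, pi/2 and pi, and off by roughly 0.056 at worst - the kind of
// cheap, inaccurate built-in a first GPU generation might ship.
float approx_sin(float x)
{
    const float pi = 3.14159265f;
    return (4.0f / pi) * x - (4.0f / (pi * pi)) * x * x;
}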

I agree with you that the first important step is a standard shader interface. This will make life a lot easier, agreed completely. However, I think having to support several shader code paths is indeed ugly. Our experience is that it costs as many development resources as developing rendering code for both OpenGL and Direct3D, or even more. And if different hardware only differs in “slightly different feature sets”, then it should be no problem for the hw vendors to implement one standard.

But this will only happen if there is one common goal. Otherwise, every hardware vendor will develop in a different direction: “It would be easy to implement, but we think that log is not important, so we won’t support it.” In Direct3D, every hw vendor is somehow forced to support the DX vertex/pixel program standard defined by Microsoft. For OpenGL, OpenGL 2.0 could define a goal in a similar way, so that a hw vendor will implement log in order to be able to call its hardware OpenGL 2.0 compliant.

Originally posted by Jurjen Katsman:
Folker: As that discussion seems to continue here (features, hardware limits, same thing), I feel justified commenting on it.

I think there are important differences between hardware limits and (other) features.

First, in contrast to features, hardware limits are hard to define (see the asm vs. HLSL discussion; in the end the only possible solution is a GL function along the lines of “can this shader be executed?”). Second, the step to supporting all features of fully programmable gfx hw is probably smaller than the step to supporting unlimited resources.
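(A purely hypothetical sketch of what such a yes/no query could boil down to on the driver side; none of these names exist in any real GL header.)

struct CompiledShader { int instructionCount; int temporariesUsed; };
struct HardwareLimits { int maxInstructions; int maxTemporaries; };

// The driver answers yes/no for a whole shader instead of exposing
// every individual limit to the application.
bool canExecuteShader(const CompiledShader& s, const HardwareLimits& hw)
{
    return s.instructionCount <= hw.maxInstructions
        && s.temporariesUsed  <= hw.maxTemporaries;
}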

So I think in practice there are three steps:
a) A standard interface for shaders. This is useful immediately, also for today’s hardware.
b) A standard, fully programmable shader language. This will naturally be supported by hardware in the near future. Currently the only missing hw features are flow control and generic texture lookup (sin/cos/log etc. are sub-routines).
c) No hardware limits. Possible, but requires additional work (e.g. the f-buffer). See the long previous discussions about this topic. But maybe this problem vanishes automatically if future hardware has limits which are equal to infinity in practice.

These steps are fairly natural. The “only” question is: what should be required for “OpenGL 2.0 compliant”, and what should be left to OpenGL 2.0 sub-functionality extensions (compatible with full OpenGL 2.0)? In my opinion, OpenGL 2.0 should require all of them. If every hw vendor cries “c) is not possible!!!”, then - but only if there is really no other way - OpenGL 2.0 should require only a) and b).
As mentioned, in the end this is only a naming question: what do you want to call “OpenGL 2.0”? But since b) is already quite near, and OpenGL 2.0 should be future-oriented and define the direction for the future (instead of focusing too much on current hardware), I would suggest that hardware can be called OpenGL 2.0 only if it supports all of a), b) and c). At minimum a) and b).

For example:
a) alone can be called OpenGL 1.5
a) together with b) can be called OpenGL 1.6
a) and b) and c) is called OpenGL 2.0.

One additional note:
I think it is a good idea if OpenGL 2.0 sets a vision / direction for the future, instead of only standardizing existing features (like the two-year-old ARB_VERTEX_PROGRAM). I think setting a standard for the future reflects exactly the spirit of OpenGL. And this was the reason why OpenGL didn’t have to change for such a long time, whereas D3D changed its architecture again and again in the meantime.

So I think it is important that OpenGL 2.0 sets a standard for the future, instead of only “not ignoring” the future.

Didn’t the minutes of the last ARB meeting record that NVIDIA weren’t offering Cg to the ARB as part of OpenGL?

BTW, why are we talking about modern HW?
And about today’s HW differences?
A standard isn’t done in days, or even months.
It will probably take at least 1-2 years until OGL2.0 is approved and we have the first implementations.
And then probably all modern mainstream HW will be OGL2.0 compatible.
Then no one will think about GF1 or GF2 (look at modern games… GF1 and GF2 aren’t enough for them… and GF3 is more like the minimum).
By then most vendors will probably be able to support OGL2.0 100%.

And remember, we are talking about an HLSL here - not a SIMD language like NV_vertex_program!
The language itself will do compile-time optimizations etc.

Originally posted by dorbie:
Didn’t the minutes of the last ARB meeting record that NVIDIA weren’t offering Cg to the ARB as part of OpenGL?

That’s what I remember reading, as well. I’m confused. I guess nVidia changed their minds? Or would they be retaining ownership of the language specification and giving a license for usage by those implementing OpenGL?

– Jeff

http://biz.yahoo.com/prnews/020723/sftu012_1.html

That press release just talks about the compiler receiving an open-source treatment. In order to become part of the OpenGL 2.0 specification as the standard HLSL, I believe nVidia would be required to give up control over the /language specification/, not just the compiler source.

The language specification has never been mentioned as part of the package that nVidia would be releasing control of, and ARB notes show nVidia reviewing the Cg language while specifically stating that they were not offering it to the ARB for consideration.

That is the part I’m curious about.

It’s good to know the compiler source has been released, though - it /should/ allow any individual to write back-end profiles for whatever shader language they’d like to have come out of the compiler…

– Jeff
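(To make that back-end idea concrete, here is an entirely hypothetical C++ sketch of a pluggable profile interface; it is not taken from the actual source of the open-sourced Cg compiler.)

#include <string>

struct IntermediateProgram;   // whatever the front end hands to a back end

class BackendProfile {
public:
    virtual ~BackendProfile() {}
    // Translate the front end's intermediate form into the target shader
    // language - e.g. ARB-style assembly, or any other language someone
    // cares to emit.
    virtual std::string emit(const IntermediateProgram& program) = 0;
};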

Why are we talking about modern HW?
And about today’s HW differences?
A standard isn’t done in days, or even months.
It will probably take at least 1-2 years until OGL2.0 is approved and we have the first implementations.
And then probably all modern mainstream HW will be OGL2.0 compatible.

By modern, of course, you mean “cutting edge $400+ card” rather than “mainstream HW that 80% of the gaming public has in their machines.” I’m not interested in an API that doesn’t support the mainstream hardware.

In one to two years, mainstream hardware will be DX8.0/8.1 cards (low-end GeForce3’s and Radeons). That means OpenGL 2.0 will be absolutely, totally, and in all other ways useless to us game developers unless it can actually make use of the features of those cards.

Most modern games are built assuming the user has a GeForce1/2. And yes, those cards are enough to run them at reasonable resolutions (though not at 1280x1024 or 1600x1200 like most benchmarks try to). A GeForce3 is hardly the minimum for the vast majority of games, unless you need 100+ fps.

The mainstream tends to lag behind current hardware by a good two years. When the GeForce 3 came out, most games were being developed assuming that the user had a TNT2 of some kind. When the GeForce 5 comes out, you’re looking at the GeForce2 MX as the base. When the GeForce 7 comes out (with full GL2.0), you’re looking at GeForce 3s being prevalent. Few are the game developers who are going to waste their time with GL 2.0 when it will only touch a small fraction of the installed base. It’s just ridiculous to develop an API/language solely for the purpose of supporting hardware that:

1: Doesn’t even exist today
2: Won’t be mainstream for the next 3-4 years.

It’s easy for us programmers to go off and buy an R300-based card and assume that this is what everybody has. But that’s simply not the case, and game developers know it. The ARB needs to understand that whatever they develop must be backwards compatible down to the level of the GeForce3. Otherwise, the gaming community (minus a few hardheads like Carmack) will abandon GL as a game-programming language.

Originally posted by Korval:
In one to two years, mainstream hardware will be DX8.0/8.1 cards (low-end GeForce3’s and Radeons). That means OpenGL 2.0 will be absolutely, totally, and in all other ways useless to us game developers unless it can actually make use of the features of those cards.

And in one or two years, DX8/DX8.1 games will be out everywhere. And in one or two years, people will start developing the games that ship 4 to 6 years from now - and by then DX9 will be the old standard.

GL2 should NEVER NEVER NEVER NEVER be a standard with a thousand version fallbacks for older hw. It should draw a final line and finish all that mess. It doesn’t matter whether it becomes mainstream 4 years from now, or 5. Until then, NVIDIA is there and feeds us new versions of Cg every half year. GL2 wants to standardize shaders, and that’s not possible with current hardware, and not even with the next-gen hardware, as powerful as it is.

GL1 was a standard for high-end PCs, not for gamers.

Originally posted by davepermen:
GL2 should NEVER NEVER NEVER NEVER be a standard with a thousand version fallbacks for older hw. It should draw a final line and finish all that mess. It doesn’t matter whether it becomes mainstream 4 years from now, or 5. Until then, NVIDIA is there and feeds us new versions of Cg every half year. GL2 wants to standardize shaders, and that’s not possible with current hardware, and not even with the next-gen hardware, as powerful as it is.

Where can I sign your petition?

GL1 was a standard for high-end PCs, not for gamers.

And the only reason OpenGL was adopted for games at all was that it was superior to the jumbled mess of trash that was D3D 3.0. Had Microsoft done a decent job with D3D earlier, they wouldn’t have had to contend with GL as competition to D3D.

Precisely what is so bad about having backwards-compatibility extensions in the shader language? What is so terrible about designing the language to be useful for writing GeForce3 shaders? It doesn’t hurt the language in any way to do so, because they would all be extensions that one can choose not to use. The only thing it does is make GL 2.0 more inclusive of hardware rather than exclusive.

GL2 wants to standardize shaders, and that’s not possible with current hardware, and not even with the next-gen hardware, as powerful as it is.

And, yet, D3D 9 seems to have done a reasonable job of it. And I’ll bet that D3D 10 does a reasonable job of standardizing its additional features, too.

Originally posted by folker:
Where can I sign your petition?

Amen, brother.

I really don’t understand the “must work on current mainstream hardware” viewpoint. Neither side disputes that generalized programmability is the way to go. Future hardware will be able to do that. Current hardware can’t, for varying values of “can’t”. Cut the pie any way you like, the changeover is going to be messy. So what? GL2 is going to be backward compatible. For a few years folks will be writing engines with an all-singing all-dancing GL2 backend and fallback extended-GL1.x backends, just as they write NVGL and ATiGL and fallback standard GL backends today. There’s no need to bastardize GL2 when extended GL1.x exposes the current idiosyncratic functionality as cleanly as it can be exposed. It’s ugly but it won’t last for ever.
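(A small C++ sketch of that multi-backend structure; the class and function names are hypothetical, not from any real engine.)

class RenderBackend {
public:
    virtual ~RenderBackend() {}
    virtual bool isSupported() const = 0;  // e.g. check GL version / extensions
    virtual void drawScene() = 0;
};

// A real engine would have concrete subclasses such as a GL2Backend (the
// all-singing, all-dancing path) and a GL1xBackend (the extended-GL1.x
// fallback), each implementing the two virtuals above.

RenderBackend* pickBackend(RenderBackend* gl2, RenderBackend* gl1x)
{
    // Prefer the clean GL2 path; fall back to the vendor-extension path,
    // just as engines pick NVGL / ATiGL / plain GL paths today.
    return (gl2 && gl2->isSupported()) ? gl2 : gl1x;
}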

GL2 is supposed to be the light at the end of the tunnel. If it’s just going to be another mess of vendor-specific subsets when we get there, what’s the point?

I agree, MikeC.

Keep OGL2 clean and get it out now. This way, in 2-3 years, or however long it takes to make a game, the mainstream will have the hardware to support it, and games should have fewer bugs, because effort goes into the actual game instead of into debugging graphics for the 25 versions of hardware extensions out there.

If a clean slate isn’t used, then things keep staying like that one golfing joke (for those who’ve heard it): hit the ball, drag Fred, hit the ball, drag Fred …