OpenGL's Demise

OpenGL’s most formidable opponent is not M$, but its own lack of organization. There are a handful of video hardware companies supporting it by providing their own extensions to OpenGL. When this happens, OpenGL is no longer “open”, but proprietary. Every time I see someone using some new NVIDIA feature, etc., I ignore it. This “feature” is not applicable to the masses; and by the way, that is where the money is, and where there’s money, there’s a chance for survival and growth. That’s reality. When I program file I/O, I don’t have to be concerned with SCSI, IDE, EIDE, RAID, etc. But if I want to do some advanced graphics programming, I have to be concerned with using extensions provided by different vendors. This is where organization is needed: provide a consistent interface to the new underlying technology, regardless of the vendor. This is what M$ does, and consequently what DX does. This is what M$ did in the ’80s, and it is what made them a behemoth.
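To make the complaint concrete: per-vendor coding in OpenGL starts with parsing the driver’s extension string. Here is a minimal sketch of a correct check (the function name and structure are my own illustration, not from any particular codebase; the pitfall it avoids is real, since a naive substring search can match one extension name inside another):

```c
#include <string.h>

/* Return 1 if `name` appears as a complete, space-delimited token in the
 * extension list `extlist`. A plain strstr() would wrongly report a match
 * for "GL_ARB_texture" inside "GL_ARB_texture_compression". */
int has_extension(const char *extlist, const char *name)
{
    size_t len = strlen(name);
    const char *p = extlist;

    while ((p = strstr(p, name)) != NULL) {
        int starts_token = (p == extlist || p[-1] == ' ');
        int ends_token   = (p[len] == '\0' || p[len] == ' ');
        if (starts_token && ends_token)
            return 1;       /* found as a whole token */
        p += len;           /* partial match; keep scanning */
    }
    return 0;
}
```

In a real program the list would come from `glGetString(GL_EXTENSIONS)`; the point is that every vendor-specific feature begins with a check like this, followed by a vendor-specific code path.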

Can the hardware companies organize to provide a non-proprietary API to the masses exposing their new and rapidly advancing technology?

I hope so.


Your post is misleading; not all DX functionality is available on all cards. With OpenGL extensions, vendors have the option of innovating without begging M$ to introduce a feature. You are listing a strength as a weakness and vice versa. Most of OpenGL 1.1 was available as extensions to 1.0 long before 1.1 itself was available, and the same has been true of every OpenGL release since.

What’s all this “to the masses” crap? OpenGL is available now, the extensions of various types are there and anyone is free to use it. The hardware vendors disagree on how and what functionality should be exposed, that’s it. They have a right to this disagreement. Perhaps if they were forced down one particular design route they might not support OpenGL as an option at all.

[This message has been edited by dorbie (edited 07-14-2002).]

Originally posted by dorbie:
[b]Perhaps if they were forced down one particular design route they might not support OpenGL as an option at all.[/b]

By that reckoning, vendors would not support Direct3D either.
It’s nonsense to suggest that vendors have a choice as to whether they support OpenGL. They’re not doing it for anything but commercial reasons. A lot of games developers use OpenGL exclusively, id Software being the main one. OpenGL is used in the simulation/visualisation industry almost to the exclusion of any other API.
The problem with OpenGL is that it takes too long for ARB extensions to appear; it actually takes longer than it takes MS to release another version of DirectX.

What I mean by the masses is having a feature available that can be coded once and not for each hardware vendor’s particular implementation.

Even though DX functionality may not be available on all cards, there’s a single consistent interface to each feature. And that’s my point. This is why I used file I/O as an example.

The functionality of file I/O hasn’t changed for years, perhaps even decades, whereas in real-time 3D graphics new features appear and change all the time.

I suppose eventually, when there is nothing left to innovate, 3D graphics will be in the same situation as file I/O: identical on all platforms. But don’t expect this for probably 20 years or so.


That’s not true about file I/O. Asynchronous I/O, which is pretty much necessary for high-performance solutions, does not yet have a standardized interface. It’s not even supported on many Windows platforms (i.e. all 16-bit Windows), and it uses different system calls on different flavors of UNIX.

Anyway, if you cover nVIDIA extensions and ATI extensions, with a fall-back to plain-ish OpenGL, then you cover 90% of the hardcore gamer market with non-fallback cases, and all of the market with the fallback. Them’s just the rules, and you can either play or not.
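The “cover NVIDIA, cover ATI, fall back to plain OpenGL” strategy amounts to a one-time dispatch at startup. A hedged sketch of what that looks like (the extension names are real vendor extensions of the period; the path labels and selection order are purely illustrative, and a production check should match whole tokens rather than use `strstr` directly):

```c
#include <string.h>

/* Pick a render path from the driver's advertised extension list:
 * prefer a vendor-specific path when present, otherwise fall back
 * to a plain OpenGL 1.x path that runs everywhere. */
const char *choose_render_path(const char *extlist)
{
    if (strstr(extlist, "GL_NV_vertex_program") != NULL)
        return "nv";        /* NVIDIA-specific vertex-program path */
    if (strstr(extlist, "GL_EXT_vertex_shader") != NULL)
        return "ati";       /* ATI-era EXT_vertex_shader path */
    return "fallback";      /* plain OpenGL, no vendor extensions */
}
```

The game then maintains one renderer per returned label. This is exactly the duplicated effort being complained about above, and exactly the flexibility being defended below.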

I personally think that the “Intel Built-in 3D Graphics Decelerator” is much more of a problem for anyone trying to deliver high-quality graphics to a mass market than any API warts or extensions. And now they have the “extreme” version, which at least has 4 virtual texture units. Of course, it’s still sharing a single frickin’ PC-133 memory stick with a 2 GHz CPU!!! AAAARRRGHGHGHGHG!

>>>But, if I want to do some advanced graphics programming, I then have to be concerned with using extensions provided by different vendors.<<<
Don’t forget that you don’t have to use extensions. Or, on the other hand, the community could all decide to support one vendor’s extensions and ignore the competition, on condition that that vendor makes them available to the other vendors.

Bah … this is just growing pains, as mentioned above. Also, I think the ARB has a right to take its time, but it should have planned ahead: GL 1.3 should have had one or two features of 2.0, as extensions.

What are you working on now, Lob?



Rubbish. OpenGL allows them to expose functionality in a way they choose. OpenGL would rapidly be deprecated if vendors developed no new extensions; one reason it’s supported at all is this flexibility. There’s a mutually supportive relationship there. Having another ‘standard’ imposed would just not work IMHO, and indeed cannot: there are more players involved than ATI and NVIDIA, and those two can barely agree on the time of day. You’re still ignoring the fact that D3D features are far from universally supported.

V-man has a point about the power of developers to drive this through selective use of extensions, and it makes me think of Carmack’s latest .plan and his pending lobbying effort for OpenGL 2.0. Maybe there’s light at the end of the tunnel, but it is optimistic to expect that developers can cooperate to this degree given that vendors can’t.

Originally posted by dorbie:
You’re still ignoring the fact that D3D features are far from universally supported.

‘Far from universally supported’? I don’t think so…
If a card can’t do EMBM, then it can’t do EMBM, whether in D3D or OpenGL. If a card can do EMBM, then it will do it using the standard interface in D3D, while OpenGL provides a huge list of vendor-specific extensions to do the job. I won’t start banging that drum again; I’m sure everyone’s aware of my opinion on vendor-specific extensions doing stuff that should be standard by now.

I’ll add some good news.

This was posted on the main page but for some reason deleted. I managed to save the link,

Looks like everything worked out for the ARB in the end ( regarding the vertex programming extension ).

I think there is often a misunderstanding of just what it takes to make a platform-independent structure like OpenGL. We get into similar arguments at work. The problem is not the structure, the design, or even the time it takes to vote on new changes. The problem is that it IS a platform-independent library. That means that even IF all the PC card manufacturers agree on a method, you still have the non-PC folks. Architecture drives newer technology: what may be easy to build on a PC may be more difficult for SGI, and vice versa. So it is almost impossible to “predict the future” and get a sound structure that is unchangeable.

The vendor extensions, I believe, are a good idea. They give strength to OpenGL on a pre-release basis for those programmers on the “bleeding edge”, as long as the vendors fully back the implementations the ARB votes on sometime later. Also, as pointed out here, programmers/developers get to drive a small portion of the ARB voting through their use of those extensions.

My approach is to use primarily OpenGL with ARB extensions (market-wide equivalence), and where a vendor extension will save me significant time and/or money, I will use it. I will not, however, use a vendor extension to pull in only a 1/2% increase in performance; but I might for an ARB extension.

I’m still waiting for OpenGL 2.0 with more optimism than pessimism (hey, which is something new for me), but I am sure that will change after I find out how much work I have to do to support it.

>>>The problem is that it IS a platform-independent library. That means that even IF all the PC card manufacturers agree on a method, you still have the non-PC folks. Architecture drives newer technology: what may be easy to build on a PC may be more difficult for SGI, and vice versa.<<<

I haven’t really understood this “multiplatform = extra difficulties” business.
Two things can happen. Either GL is ahead of its time and no one can implement the whole thing in hardware (let’s say hardware is fastest), or GL is behind, so many vendors offer similar features with minor differences and expose them as extensions, while some vendors have no model to follow.

Ahead of its time sounds better than a whole lot of arguing about “we are not ready to implement this feature yet, so we don’t want it in core OpenGL”.

Or I misunderstood the problem.



That press release for OpenGL 1.4 was issued in error by SGI.

It hasn’t been voted on yet.
Please spread the word as this release is confusing enough as is.


lobstah, I would not see all of this so negatively. I am personally quite happy that the vendors offer new things directly in their newest drivers and that you don’t need to wait half a year. Of course I would also prefer it if the vendors standardized everything immediately when a new extension appears, but I don’t think that will ever happen. After all, the vendors would not sell more of their cards if they standardized it, would they?

I personally think along the lines of “GeForce, TNT, ATI Radeon, ATI Rage and Intel chipsets; the rest doesn’t matter.” It should work on the rest as well, but these listed ones simply own the market, and someone who has an older or more exotic card has to expect graphical errors or reduced speed. And please don’t tell me D3D is more standardized, because it isn’t at all. The interface itself is, but what does that help you if D3D tells you a feature is supported when it isn’t? I can still remember very well all the workarounds for the Intel and Voodoo cards in D3D.


Knackered, that’s one feature. The situation is far more complex than you pretend: a rigid feature set is either designed around a lowest common denominator or excludes functionality that a subset of cards supports (it really depends on the feature). You want to pretend that you can have your cake and eat it; you can’t.

But… who’s Denise?

I’m talking about a unified interface, dorbie. I’m not saying small compromises don’t have to be made. Compromises are acceptable when the interface is unified. Lots of compromises were made when OpenGL first came into being: lots of hardware had to be designed with the limitations of the API in mind, and extensions were only introduced to give us new features. Other vendors then wrote to those extension specs, rather than producing extensions with the same functionality but with different interfaces for purely commercial reasons.

So we agree that compromises are inevitable, however I’m saying the incentive to support OpenGL diminishes if vendors are forced to make these compromises early. The horsetrading over exposed features and common API is inevitable but vendors still expose their own functionality through their individual extensions. This is a key benefit of OpenGL for them.

Let’s think about just how platform-independent OpenGL is. Well, it is very platform-independent; it runs on everything. But let’s look at the list of 3D chipset makers that have good working drivers on a variety of platforms. Right now NVIDIA is the only company (in mainstream graphics cards) that has a Linux driver as good as its Windows driver. Right now ATI is just finally getting around to creating a unified driver system. Just wanted to remind you guys that a platform-independent interface doesn’t mean it’s available on all platforms.