> If you are really the Mark Kilgard, I have to say, I’m rather shocked by your suggestions. In one of your recent postings, you said that “The Beast has now 666 entry points”. Do you really believe that an API of 666 (and growing!) functions is easier to maintain and extend than a more lightweight one?
I think (scratch that), I know that the size of the API (whether 20, 666, or 2,000 commands) has little to do with how easy it is to maintain and extend a 3D API. Does that shock you? It might; I’ve worked on OpenGL for 18 years, so I approach your question with a good deal of accumulated experience and even, dare I say, expertise on the subject.
I don’t think API entry point count has much, if really anything, to do with maintainability of an OpenGL implementation. It has far more to do with 1) investment in detailed regression testing, 2) hiring, retaining, and encouraging first-rate 3D architects, hardware designers, and driver engineers, 3) clean, well-written specifications, and 4) a profitable business enterprise that can sustain the prior three.
Those are the four key factors. I could probably list more if you forced me to, but those four are really the key ones. If you forced me to list 20 more, I’m confident API size would still not make my list.
> nVidia and ATI are maybe the most important contributors to GL3.0+. If you seriously doubt that removing DLs and GL_QUADS is a bad thing, why didn’t you prevent it back then?
I thought it was a poor course of action then; I think it’s a poor course of action now. I’ve done my best to prevent deprecation from hurting the OpenGL ecosystem. Deprecation exists, but I consider it to be basically a side-show.
NVIDIA doesn’t remove and won’t remove GL_QUADS or GL_QUAD_STRIP or display lists (or any of the so-called deprecated features). These features all just work. Obviously our underlying GPU hardware does (and will always) support quads, etc.
Now if YOU want to avoid these features because YOU think (or someone else has convinced you) these fully operational features are icky or non-modern, go ahead and don’t use them. But nobody has to stop using them, particularly if they find them useful/fast/efficient/convenient or simply already implemented in their existing code base. NVIDIA intends to keep all these features useful, fast, efficient, convenient, and working.
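To make concrete what is being defended here, below is a minimal, hypothetical sketch of the classic usage pattern in question: immediate-mode GL_QUADS compiled into a display list and replayed each frame. The geometry and the function names (build_quad_list, draw_scene) are made up for illustration; only the gl* calls are standard OpenGL 1.x.

```c
#include <GL/gl.h>

static GLuint quad_list;

/* Compile a single quad into a display list once at startup. */
void build_quad_list(void)
{
    quad_list = glGenLists(1);
    glNewList(quad_list, GL_COMPILE);
    glBegin(GL_QUADS);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glVertex3f( 1.0f,  1.0f, 0.0f);
        glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
    glEndList();
}

/* Replay the pre-compiled geometry every frame. */
void draw_scene(void)
{
    glCallList(quad_list);
}
```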
The problem is that someone’s judgment (be they app developer, driver implementer, or whatever) of what is good and bad in the API probably doesn’t match the judgment of others. My years of experience inform me that people tend to consider features they personally don’t happen to use as “non-essential” and ready fodder for deprecation. The fact that other OpenGL users may consider these same features totally essential and have built substantial chunks of their application around the particular feature you consider non-essential probably doesn’t matter much to you; I assure you the person or organization or industry relying on said feature feels differently.
What you might not appreciate (though I do!) is that this unspecified “other user” may be the one that does far more than you to sustain the business model that supports OpenGL’s continued development. CAD vendors used to say (this is less so now) they didn’t care about texture mapping; game developers would say they don’t care about line stipple or display lists.
For good reason, the marketplace doesn’t really let you buy a “CAD GPU design” or “volume rendering GPU design” or “game GPU design” tailored just for CAD, volume rendering, or gaming; the same efficient, inexpensive GPU design can do ALL these things (and more!) and there’s no specialized GPU design on the market that can do CAD (or volume rendering or gaming) better than the general-purpose GPU design.
That said, a particular product line (such as Quadro for CAD) can be, and is, tailored for the demands of high-end CAD and content creation, but the 3D API rendering feature set (what is actually supported by OpenGL) is the SAME as for a GeForce intended for consumer applications and gaming. In the same way, when GeForce products are tailored for over-clocking and awesome multi-GPU configurations, that’s simply tailoring the product for gaming enthusiasts. This is much the same way there’s not one CPU instruction set for web browsing and a different instruction set for accounting.
There’s a fallacy that if the GPU somehow stopped doing texture mapping well, it would run CAD applications better; or that if the GPU stopped doing line stipple (or quads or display lists), it would magically play games faster. In isolation, the cost of any one of these features is pretty negligible, and subtracting one feature certainly won’t improve a different one. There have also been repeated examples of “unexpected providence” in the OpenGL API, where a feature such as stencil testing, designed originally for CAD applications to use for constructive solid geometry and interference detection, gets used to generate shadows in a game such as Doom 3 or The Chronicles of Riddick.
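For readers unfamiliar with that particular reuse, here is a rough, hedged sketch of the depth-fail stencil pass behind stencil shadow volumes (the approach Doom 3 made famous). It is illustrative only: draw_shadow_volumes() is a hypothetical helper that renders the extruded volume geometry, and glStencilOpSeparate assumes OpenGL 2.0 (or equivalent two-sided stencil functionality).

```c
#include <GL/gl.h>

/* Hypothetical helper (not part of OpenGL): renders the extruded
 * shadow volume geometry for the current light. */
extern void draw_shadow_volumes(void);

/* Depth-fail ("Carmack's reverse") stencil pass.  Assumes the depth
 * buffer has already been filled by an ambient/depth pre-pass. */
void stencil_shadow_pass(void)
{
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    /* Back faces that fail the depth test increment the stencil count;
     * front faces that fail decrement it. */
    glStencilOpSeparate(GL_BACK,  GL_KEEP, GL_INCR_WRAP, GL_KEEP);
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP, GL_KEEP);

    glDisable(GL_CULL_FACE);          /* render both faces in one pass */
    draw_shadow_volumes();
    glEnable(GL_CULL_FACE);

    /* The subsequent lighting pass draws only where stencil == 0,
     * i.e., pixels not in shadow. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}
```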
Said another way, if I concentrated on just the features of OpenGL YOU care about, I would likely NOT have a viable technical/economic model to sustain OpenGL. It’s probably also true that if I just concentrated on the features of unspecified “other user” of OpenGL, I would also NOT have a viable technical/economic model to sustain OpenGL. But in combination, the multitude of features, performance, and capacity requirements of the sum total of 3D application development create a value-creating economic environment that sustains OpenGL in a way that benefits all parties involved.
Knowing this to be true, how do you expect that “zeroing out” features by deprecation is going to suddenly make other features better or faster? There’s a knee-jerk answer: duh, well, if company Z doesn’t have to work on feature A anymore, they will finally have the time/resources to properly implement feature B.
But that doesn’t hold up to scrutiny. Almost all of the features listed for deprecation have been in OpenGL since OpenGL 1.0 (1992). If the features were simple enough to implement in hardware 17 years ago, and GPUs now have over 200x more transistors for graphics than back then, was it really the complexity of some feature that has saddled company Z’s OpenGL implementation for all these years? Give me a break.
Moreover, feature A and feature B are very likely completely independent features with almost nothing to do with each other, in which case you can’t claim feature A is making feature B hard to implement.
> Existing (old) APIs can use the old OpenGL features. But you should not encourage people to use these old OpenGL features in their new, yet to be created APIs and applications.
I encourage anyone using OpenGL to use any feature within the API, old or new, that meets their needs.
If you think I’m going to be going around telling NVIDIA’s partners and customers (or anyone using OpenGL) what features of OpenGL they should not be using, you are sadly mistaken.
Developers are free to use old and new features of OpenGL and they should rightfully be able to expect the features to interact correctly, operate efficiently, and perform robustly. Why would I (or they) want anything less than that?
I think it is wholly unreasonable to tell developer A that in order to use new feature Z, developer A is going to have to stop using old features B, C, D, E, F, G, H, I, J, K… (the list of deprecated features is long) that have nothing to do with feature Z.
This isn’t to say that I want OpenGL to be stagnant. Far from it; I’ve worked hard to modernize OpenGL for the last decade. I wrote and implemented the first specification for highly configurable fragment shading (register combiners), specified the new texture targets for cube mapping, specified the first programmable extension for vertex processing using a textual shader representation, played an early role (and continue to do so) in developing a portable, high-level C-like language (Cg) for shaders, specified and implemented support for rectangle and non-power-of-two textures, implemented the driver-side support for GLSL and the OpenGL 2.0 API for NVIDIA, and more recently worked to eliminate annoying selectors from the OpenGL API with the EXT_direct_state_access extension. Before any of this, I wrote GLUT to help popularize OpenGL.
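As a small illustration of the selector problem that EXT_direct_state_access addresses, compare the classic bind-to-edit pattern with the direct-state-access call below. This is only a sketch assuming the extension is available; in a real program the EXT entry point would typically be fetched through an extension loader rather than called directly.

```c
#include <GL/gl.h>
#include <GL/glext.h>

void set_min_filter(GLuint tex)
{
    /* Classic selector-based update: editing the texture requires
     * binding it first, disturbing whatever texture was bound before. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* EXT_direct_state_access style: name the object explicitly,
     * leaving the texture binding (selector) untouched. */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}
```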
All in all, I’m pretty committed to OpenGL’s success. If I thought deprecation would make OpenGL more successful, I’d be all for it (but that’s entirely NOT the case). Instead, I think deprecation is, on balance, bad for, or at best irrelevant to, OpenGL’s future development and success.
I’m really proud of what our industry (and the participants on opengl.org specifically) have managed to create with OpenGL. Arguably, source code implementing 3D graphics is MORE portable across varied computing platforms than code to implement user interfaces, 2D graphics, or any other type of digital media processing. That’s amazing.
But deprecation in OpenGL is an unfortunate side-show. It’s a distraction. It gives other OpenGL implementers an excuse for foisting poorly performing and buggy OpenGL implementations on the industry; they (wrongly) get let off lightly by you developers by employing a “blame the API” strategy that places the costs of deprecation wholly on YOU, rather than on them properly designing, implementing, and testing good OpenGL implementations.
Deprecation asks You All (the sum total of OpenGL developers out there) to solve Their Problem, which is that they refuse to devote the time and engineering resources to implement OpenGL robustly; instead, they blame the API and hope You All will re-code All Your applications, avoiding the simpler solution of Them just implementing their own OpenGL implementations properly.
Trust me, API size is NOT at the core of why these problem implementations are poor (go back to the four factors I listed earlier…). Attempts to “blame the API” for what are clearly faults in their implementations don’t fix any root causes.
As an OpenGL developer, rather than poorly utilizing your time trying to convert your code to avoid deprecated features, you would be better served sending a loud-and-clear message that you expect OpenGL to be implemented fully, efficiently, and robustly.