A quote from Mark J. Kilgard • Principal System Software Engineer • nVidia

… the notion that an OpenGL application is “wrong” to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for “deprecation” (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn’t to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.

The truth is that modern OpenGL implementations are highly tuned at processing immediate mode; there are many simple situations where immediate mode is more convenient and less overhead than configuring and using vertex arrays with buffer objects.

http://www.slideshare.net/Mark_Kilgard/using-vertex-bufferobjectswell

//==================================================================================

This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the ’90s, and has been doing so on behalf of one of the two biggest names in gaming hardware. Given what that man said as a representative of nVidia, I feel that it is safe to assume that there will be no functionality dropped from OpenGL anytime in the near future, so far as nVidia hardware and drivers are concerned. Now, I may be going out on a limb here, but I suspect that AMD/ATI will be holding fast to this as well. My logic is as follows: despite the lack of any public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly taking out features that they currently support and have always supported.
One may also conclude from this that many other features of the OpenGL API that people are now afraid to use will not be going anywhere, nor should they.

Issues will arise for people who want to branch into mobile development if they are not careful with certain aspects of the more “dated” API functions, but it’s also very likely that much of what is currently available in the broad OpenGL API will become increasingly available on handhelds as their GPUs and driver models become more sophisticated. On desktops, OpenGL is almost fully backwards compatible going back 15 years. This is true for ATI and nVidia, and even Intel has been following this model as best they can with their little purse-sized computers.

I feel that it is safe to assume that there will be no functionality dropped from OpenGL anytime in the near future

It has already been dropped from core OpenGL. The only reason the old stuff is still around is GL_ARB_compatibility, which allows vendors to keep supporting all the features in a single driver.

My logic is as follows: despite the lack of any public statement on this matter from ATI representatives, we can safely assume that AMD/ATI are not going to give nVidia the upper hand by suddenly taking out features that they currently support and have always supported.

The actual safe bet is to simply use recent features. Although there’s no indication as to when or whether major vendors will finally drop the old stuff, I personally hope they eventually will. On Linux, Intel does not expose GL_ARB_compatibility when you create a GL 3.1 context - IIRC it’s the same for Apple on Mac OS X.

will become increasingly available on handheld’s

GLES2 has no fixed-function pipeline and no immediate mode. Neither does GLES3.

Personally, I’d like to see all of the indie developers who don’t have huge budgets or teams have all the tools they need to succeed with their visions. More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present. The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn’t necessarily work as expected on a machine with a different GPU. It takes years for the GPU manufacturers to have things working consistently with one another. It’s been this way right since the beginning of GPUs.

I can’t imagine why someone would want features stripped out of an API just because that person does not care to use them. Personally I’m going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer • nVidia - Mark J. Kilgard. Those functions belong in there, and what MJK says on this matter is likely the position of the entire development team at nVidia. I can’t imagine why people would push to remove these features when one of the lead programmers for a long-standing major GPU manufacturer is saying that this should not happen.
Wait, yes I can imagine why…

I know of one person who is pushing for this, and I also know that this same person is selling a book on the newer APIs. He likely views the tons of free open-source material that’s available to everyone as his direct competition. He wants people to pay him instead of being able to learn for free.

More supported functions give people options for getting things running in a way that makes sense to them, without having to fuss around with all of the compatibility issues that new, experimental API functions currently present.

No, more functions mean a bloated specification, more effort to implement that specification, and more effort to test and optimize it.

The chances of all the new OpenGL 4.0+ features working in exactly the same way on all hardware are slim to none. Just when you get a new feature running on one machine, you find out that it doesn’t necessarily work as expected on a machine with a different GPU.

And who is responsible for making implementations behave as they should? That’s right: guys like MJK. Driver quality has always been an OpenGL problem - and you know why new features don’t get well tested? In part, because people like you, who are relentlessly clinging to legacy stuff, just won’t implement things using new features and thus cannot find bugs to report. Of course, even if you report bugs, there’s no guarantee they will be fixed, especially if you’re a hobbyist or indie developer. And even my company, which has good relations with both NVidia and AMD and is at the top of its field, probably won’t have a shot - then again, we’re relying heavily on legacy code. A displeasing, but currently unchangeable, fact.

It takes years for the GPU manufacturers to have things working consistently with one another. It’s been this way right since the beginning of GPUs.

Again, they can only fix bugs that are found. A conformance test suite would help, but the ARB, and subsequently NVidia and AMD, don’t dedicate time and money to developing such a thing. Anyway, an implementation is a black box for an OpenGL developer, and we rely on vendors to do their job right. If they always did, your argument couldn’t even be brought up.

I can’t imagine why someone would want features stripped out of an API just because that person does not care to use them.

Well, how about this for a reason: the ARB itself decided to do so - so the decision was carried by NVidia and AMD. We’ve had this topic come up a lot of times here on the forums, and the conclusion has always been that legacy code paths might be as fast as, or faster than, certain core GL code paths - simply because the legacy stuff has been developed for decades and has reached a highly optimized state. That doesn’t mean it’s good.

Personally I’m going to continue to agree with that fellow who holds the title of, once again, Principal System Software Engineer

In daily business, API refactoring, deprecation and removal are common - at least in a code base that has existed for over a decade. The reason is simple: at the time of conception, the decisions that were made might have made sense. If those reasons don’t exist anymore and using the API is cumbersome, not future-proof, or prone to errors, it should be revamped.

Immediate mode is such an example IMHO. It was OK at the beginning but could be replaced with vertex arrays and VBOs fairly early. In general, sending a bunch of vertex attributes over the bus every time you render something is simply idiotic - especially if we’re talking complex models of which there might be hundreds or thousands per frame. BTW, MJK says the same thing. The example he uses, a rectangle (or more generally “rendering primitives with just a few vertices”), is only valid for simple prototyping IMHO. Probably every rendering engine out there encapsulates state and logic for simple primitives in appropriate data structures, so uploading a unit quad to a VBO at application start-up isn’t really a problem once you’ve written the code.
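To make the difference concrete, here’s a rough sketch of both paths for that unit quad (a hedged illustration, not production code - I’ve kept the legacy client-state binding for brevity, and the buffer handle is just a local name):

```c
/* Immediate mode: convenient, but the four vertices cross the bus on
 * every single draw call. */
glBegin(GL_QUADS);
    glVertex2f(0.0f, 0.0f);
    glVertex2f(1.0f, 0.0f);
    glVertex2f(1.0f, 1.0f);
    glVertex2f(0.0f, 1.0f);
glEnd();

/* VBO: upload once at application start-up... */
static const GLfloat quad[8] = { 0,0, 1,0, 1,1, 0,1 };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);

/* ...then each frame is only a bind and a draw: */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexPointer(2, GL_FLOAT, 0, (const void *)0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
```

The set-up is a handful of extra lines written exactly once; after that the per-frame cost is lower than replaying the glVertex calls.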

The convenience argument is simply not good enough to defend immediate mode. The debugging argument is kinda ok - however, if you know what you’re doing and have some experience, VBOs and VAOs are not hard to debug either. The performance argument is simply not valid. You cannot compare code paths which have not been tweaked and tested roughly the same amount.

EDIT: BTW, nowadays, when scenes consist of hundreds of thousands to millions of polygons per frame, wanting to keep immediate mode around for, among other things, a few simple primitives is simply hilarious. The same goes for fixed-function lighting - if someone’s too incompetent to come up with a simple Gouraud shader when desired, they should just give up OpenGL altogether.
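For what it’s worth, a per-vertex Gouraud diffuse shader really is only a few lines. A sketch in compatibility-profile GLSL, assuming a single directional light in slot 0 (so it can even reuse the fixed-function material and light state):

```glsl
// Minimal per-vertex (Gouraud) diffuse lighting, compatibility-profile GLSL.
varying vec4 v_color;

void main()
{
    vec3 n = normalize(gl_NormalMatrix * gl_Normal);
    vec3 l = normalize(gl_LightSource[0].position.xyz); // directional light
    float d = max(dot(n, l), 0.0);
    v_color = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse * d;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
```

The matching fragment shader just writes v_color out.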

I don’t use legacy code. I was defending the rights of people who do. It’s interesting that right after you made this demeaning comment about me, you went on to say that the people you work for are still using legacy code. So basically you are saying that I have the same point of view as the people who you are subservient to.

One could logically assume that since you don’t want legacy code being used where you work and since it still is being used there, that you do not have the sway or power that you would have us believe. You said [QUOTE=thokra;1252358]And even my company, which has good relations with both NVidia and AMD…[/QUOTE] - you were stretching things a bit here. It is not your company; they hired you. And just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.

you made this demeaning comment about me

It wasn’t meant to be demeaning. Granted, it might have sounded a little harsh. Still, that doesn’t make it untrue.

I don’t use legacy code. I was defending the rights of people who do.

But the people who do shouldn’t do so anymore, if possible. If they’re constrained by other business-related factors, I’m the last person to accuse them of not going the extra mile. Still, a core driver and a legacy driver would be a much better solution IMHO. People who still need to or, even if it doesn’t make any sense to me, want to rely on legacy GL could do so with a legacy driver. However, I’m perfectly aware that it would put quite a burden on the guys at NVIDIA and AMD. Thinking about it, if all vendors actually agreed on simply dropping support starting on day X, what are people going to do? Rewrite their whole rendering code in Direct3D because they’re pissed off about the disappearance of legacy support? I don’t think so. Breaking backwards compat is never a fun thing, but sometimes I think it’s necessary to take software to a higher level.

So basically you are saying that I have the same point of view as the people who you are subservient to. One could logically assume that since you don’t want legacy code being used where you work and since it still is being used there, that you do not have the sway or power that you would have us believe.

Nope, any technical novelty is pretty much embraced in principle around here. It’s just the lack of time, or the fear of alienating customers, that keeps us from implementing them. Still, if I were asked to take a stand, I would take the same position as above - even to the people I’m subservient to. The fact is, I know there’s no room for improving this at the moment, and yes, I’m in no position to demand we rewrite our whole rasterization code. However, that doesn’t mean it wouldn’t be a good idea.

and just like all the other people you have demeaned and belittled, such as the nVidia and ATI engineers, the people who gave you a job actually do know better than you, despite your belief to the contrary.

Now that’s just funny. Where did I demean any engineer? Does disagreeing equal demeaning now? I didn’t state anything that isn’t true - if you disagree, feel free to have at me. And the people who hired me gave me a job in part because I have a pretty solid understanding of modern OpenGL. The fact that I call it “my company” is simply a testament to my liking my job and identifying with my employer - not because I believe it is actually my company. How could anyone misunderstand that?

It’s important to remember that Kilgard is viewing the world through NVIDIA-coloured glasses; of course NVIDIA would like it best if everyone wrote programs that worked best on their hardware (and the fact that they have a highly tuned immediate mode implementation going back to the last century means that this is one area they would support the continued use of) but that’s not necessarily in the best interests of either developers or consumers. His technical credentials may well be impeccable, but he’s still biased.

For a fairly good idea of the kind of driver complexities that can arise from continued support of immediate mode, have a read of this: http://web.cecs.pdx.edu/~idr/publications/ddc2006-opengl_immediate_mode.pdf. The direct topic of the document is not really relevant, and some of the points it raises (particularly wrt glMapBuffer, “array state containers” and instancing) are now outdated, but it does a great job of describing many of the weird corner cases and abuses that drivers need to deal with (and must support flawlessly, because the GL spec requires it) when implementing immediate mode. Never mind consistent support of GL 4.x features; GL 1.x on its own is a nightmare landscape of bear traps and unexploded landmines.

This is exactly the problem that deprecation/removal sets out to solve. I don’t know about you, but I’d certainly prefer if driver writers spent their time working on the stuff that really matters for a modern application rather than dealing with this kind of rubbish.

It’s incredibly disingenuous to imply that drawing without immediate mode falls into the category of “new, experimental API functions” - vertex arrays have been available in core OpenGL since version 1.1 (1997!) and as an extension prior to that, and VBOs have been in core since 1.5 (2003!) and likewise available as an extension before that. I hope you didn’t mean to give that implication, but it sure read that way.

Regarding dropping of other (or even all) legacy functionality, this is one of those theoretical objections that frequently come up but that don’t even exist in the real world. I can say that with extreme confidence because a working real-world model of discarding legacy functionality (and even of completely throwing out the old API and redesigning a new one from scratch) already exists, is used, is popular and is proven to work in the field. It’s called Direct3D (the fact that Direct3D drivers can be orders of magnitude more stable than OpenGL drivers just supports the assertion that this approach works). Seriously - this is a solved problem - you’re just wasting your own time raising it as an objection.

Everybody wants OpenGL to evolve and improve, but clinging on to old rubbish that hinders that evolution and improvement is not the way to go about it. OpenGL didn’t lose the API war through shenanigans; it lost it through design short-sightedness, through letting the hardware get ahead of the core API’s capabilities, through squabbling in committees, through not giving developers features that they needed, and through fragmentation due to multiple vendor-specific extensions for doing the same thing. Wanting to retain legacy features at the expense of moving things forward (especially at a time when its position could be strengthened again, as Microsoft seems to be completely losing the plot with the two most recent evolutions of D3D) isn’t being helpful.

EDIT: “sarcasm has been removed, now this post is mostly gone”

This “war” has been almost completely one-sided and it has been Microsoft behaving this way. Well, Microsoft and people in forums bickering about which API is better.

OpenGL is not going anywhere and it’s only getting better as everyone’s drivers become more robust and diverse.

Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc… Just to name a few big hitters who are all firmly in the scene.

OpenGL ES 2.0 marked the first step towards the tomb of OpenGL if such a thing is even possible.

I am not concerned about myself as a developer; I have no problem at all with VBOs, VAOs, index buffers, FBOs, or even building a unique shader for every model I build. My run-time only uses these things.
I am not at all concerned about having to put together a custom matrix math library - I’ve already done that.

I am concerned about all the aspiring indie developers who show up here hoping to have a quick, easy start-up system that will bring them years ahead of the game. There are a lot of kids out there, and even stay-at-home dads, who want to do this, and now they have an extra 2-3 years of learning curve to deal with. This goes against the entire spirit of the free-to-learn open-source community, which has libraries upon libraries of free research material available for download.

Being able to access fixed-function state in GLSL shaders is what makes OpenGL the best choice for beginners. To say otherwise is absurd. This feature puts shader programming into the hands of children - some of the more gifted ones, anyway. Most of the people who show up here will not be able to do all these things on their own if OpenGL is gutted any further.

For people who are just starting out, having not only to learn to use a matrix math library but also to implement that library by hand is absurd. Now combine this with having to learn all the various subtleties of passing variables and matrices to the GPU, and things can soon become overwhelming for people who are new to all this.

There are a lot of people in this world who want to make a game, and many of these indie games will enrich our lives. As more and more features are stripped from the OpenGL API, this dream will fall further out of reach for many people. Not only will we have lost variety, which is something that nurtures and encourages creativity, but we will also have lost the treasure trove of information that has been amassed over the past 15 years.

I am concerned about all the people who are not going to have 5 years of doing things the easy way before they have to jump into the deep end and learn to do it all themselves in a more efficient manner.

If you want an API that is constantly being gutted and rebuilt, then go over to DirectX; Microsoft will love it - you’ll be helping them strong-arm people into buying the latest operating system they are selling.

So far as “modern” OpenGL goes: it is incredibly absurd to pack a cross-hairs model, which consists of only two or three line segments, into a VBO with indices when immediate mode can be set up to do the same thing almost instantly and with very little overhead. The set-up alone makes the VBO approach impractical here, and so does the run-time code overhead.

Also, in the case of drawing bounding box outlines for visualizing and diagnosing collision detection algorithms, immediate mode is the only proper choice. Anything else would be bug-prone, over-done fluff.

Mark J. Kilgard was right when he said that people need to be educated on the proper uses of these easy to use, and powerful tools, people should not be told that they are wrong to use them.

This is like telling people that they are backwards hillbillies because they happen to own a hand-saw. Electric saws may be the choice for most situations, but they are not necessarily the best choice for every situation.

If you want a quick easy start-up system, you use an off-the-shelf engine such as Unreal, Unity, etc.

More like an extra 2-3 weeks. If it takes you longer than that to transition from compatibility to core, you aren’t ready to be making commercial games (note: “independent” doesn’t mean “amateur”).

Being able to access fixed-function from a shader just means using a separate function for each variable rather than using glUniform() for everything.

The main advantage of the compatibility variables is the ability to have most of your client-side code work the same way with or without shaders, so it’s easier to write code which uses shaders where available but still works with 1.x.
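Side by side, the two styles look like this (a sketch; the names u_mvp and a_position are illustrative, not standard):

```glsl
// Compatibility profile: the driver tracks the matrix stack for you, so
// a vertex shader can simply write
//     gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
//
// Core profile: the same transform with your own names, fed from the
// client via glUniformMatrix4fv and glVertexAttribPointer.
uniform mat4 u_mvp;
in vec4 a_position;

void main()
{
    gl_Position = u_mvp * a_position;
}
```

The shader math is identical either way; what changes is which side of the API owns the bookkeeping.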

If you understand matrix math and can program, you can already implement most of the library and it shouldn’t take more than a few hours (the actual matrices for rotation, scaling, perspective etc are all given in the online manual pages). The only bit that’s even slightly complex is matrix inversion, which is only required for the normal matrix (assuming that your modelview matrix isn’t orthogonal) and gluUnProject().

One of the main reasons why the matrix functions were deprecated was that they were largely pointless. For most real programs (i.e. not red-book examples), you need the matrices client-side for e.g. collision (using the OpenGL functions then extracting the matrices with glGetDoublev(GL_MODELVIEW_MATRIX) etc is somewhere between bad and horrendous in terms of performance). So you end up writing your own matrix functions anyhow (and not necessarily the same ones which OpenGL uses, e.g. rotation matrices are more likely to be generated from quaternions than from either Euler angles or axis-and-angle).

Someone who can’t do this much for themselves is going to spend up to a week posting on the forums effectively asking for personalised tuition on everything from animation to parsing file formats to physics before realising that making a game is a few orders of magnitude more complex than they bargained for, and promptly giving up.

Personally, I’m a firm believer in encouraging new people to use the most painless, easy-to-use-and-configure API features available; that way they will have the best chance of succeeding, especially if they don’t need fancy-pants methods.

Except for the dangers of unavoidable situations like the OpenGL ES 2.0 spec, which severed the incredibly valuable link between the old and the new. Those devices support both, but they have such tight memory-space and bandwidth constraints that this unfortunate situation is understandable and necessary.
Yes, it is absolutely absurd to expect mobile devices to have 100 MB+ driver packages that would allow for a robust and fully-featured OpenGL environment… for now!

In the future this is likely going to happen, and they will soon all be able to give the Dynamic Duo of desktop machines a run for their money in sheer diversity of API combinations available.

Please don’t get me wrong here! Bang, exclamation point. In no way, shape or form should any present or future development be spent on “immediate mode.” That would be like carpenters investing time and money into developing new types of screws. There would be no point.

The nVidia documents that strongly indicate that no legacy features will be removed also state that no future consideration will be given to them. Those features have already been optimized, tested and refined. They will take up nobody else’s time. There is no concern that research time and effort are being wasted on that stuff; they are not reinventing the wheel over and over again with legacy code. That legacy code ran on machines that are nothing but pocket watches compared to machines today. There is no way that stuff is running slower now than it did on crappy old machines.

Legacy has not caught up in sheer raw, large-scale performance - so what? Why use a car in a situation where a bicycle will do? Legacy will not go anywhere unless it specifically conflicts with modern functions. Why should it? It takes me 5 minutes to download the absurdly large driver packages.

If you want to eliminate bugs from your code, the best way to do it is to always test your software on as many GPUs as possible, as often as possible. I keep two old junk laptops on hand for this very purpose. One is a very old and very weak x1150 mobile Radeon that my friend’s girlfriend spilled juice on. The drivers for that machine are buggy to begin with. I know that if my code runs on that machine, then it will run on almost any computer that is newer than 5 years old.

I also keep a mobile Intel GPU machine that was made right when Intel finally caught up with ATI/nVidia shader model 2.0/3.0 hardware. I know that if my code works on this machine, then it will work on everything, without any fear of bugs creeping in on someone else’s computer.

I also test under the WINE compatibility layer on a regular basis when I’ve been making substantial changes using features that behave differently under different circumstances.

Using newer features such as floating-point textures is a poop-field so far as truly cross-platform development goes. What works beautifully on some cards cannot be implemented properly on another card made by someone else.

To resolve this issue, we all have to work together to build a cheat-sheet with input from hundreds of people, all of it tested on hundreds of machine configurations. Either that, or we have to wait for the various manufacturers to play catch-up with one another. We can wait for them to do it, or we can do it ourselves. Then people will be able to use these features safely and reliably, and the GPU manufacturers will have clear, documented evidence that will help them eliminate bugs in their drivers and circuits.

OpenGL 4.0+ currently has a big problem, since a five-stage shader pipeline and all the accompanying features have a lot of kinks to be worked out. Most people who come here do not have huge teams of software designers and testers at their disposal to get this working consistently across many platforms.
We have to do this ourselves, or there will once again be huge repositories of bug-ridden code several years from now.

It would be better if we made listings of people’s efforts with trial and error, under many different circumstances.

For instance, “Which newer extensions are giving people problems, and on which machines?”
and also, “Which of the newer extensions are known to work consistently on all available platforms?”

If we create a repository of these basic facts then we will have helped to resolve this issue of GPU manufacturers being unwilling to share results with one another.

Bug-testing something as seemingly simple as a floating-point texture is out of reach for most people, since there are no reliable threads where people have listed what is working for them and what is not. The old standard of people listing their machine specs has all but disappeared. That’s probably a good thing, since I used to think these forums were nothing but horrible hardware and API flame-wars. This has changed a lot over the years, and places like this have become more civilized and productive… usually.

After not bothering with forums for half a decade, I can now honestly say that they are doing people some good. Arrogant, demeaning attitudes have toned down a lot. This is a good thing. Learning this stuff will do a person no good if they are also learning to act like a condescending, arrogant know-it-all jerk at the same time.

What we do not have is a comprehensive list of what works and what does not. We need a bug list - a cheat-sheet that spans back over 15 years of people’s experience with OpenGL, the new and the old, all updated, current, and with nearly bug-free solutions, because it represents all of our combined experience with these various API changes and additions over the years.

We have to do this ourselves. If I were to share some known bugs and pitfalls for beginners - ones that none of them would likely find written anywhere unless they already knew what to look for - it would be something like the following.

It would start a “benefits and bugs” wiki page that looks something like this:

//-==================================================================
Section-> (Fixed-function tied to GLSL)
//------------------------------------------------------------------------------------------------------------------------
PROs: Very easy to use. A beginner could write and configure animation and lighting shaders very easily.
//------------------------------------------------------------------------------------------------------------------------
CON: (Not yet possible on mobile devices!)
//------------------------------------------------------------------------------------------------------------------------
CON: Most source code for this style was written when only a few driver models were automatically performing casting. There is a lot of very interesting source code from that era, but even to this day, all of those horrible casting errors are still tripping up a lot of the new players in the GPU arena. Drivers on mobile devices can’t handle that much crap being thrown at them.
//------------------------------------------------------------------------------------------------------------------------
PRO: The Dynamic Duo are champs at fixing these problems with very little overhead.
//------------------------------------------------------------------------------------------------------------------------
Specific bug listing A_1: gl_FrontMaterial.shininess will not yield consistent results across many GPUs. Apparently, different manufacturers are using different procedures behind the scenes for this one. Of all the common ones, it’s the only one I’ve found to be unreliable across hardware and platforms.
//------------------------------------------------------------------------------------------------------------------------
CON: Any time a fixed-function material or lighting variable is used in GLSL, all possible fixed-function material and lighting parameters available to GLSL are added to the compiled shader, even if they are not all being used. This is not as bad as it sounds: this method was already working reasonably fast back in the Radeon 9800/nVidia FX days, so it’s not going to slow down something made in the last few years. It’s not yet practical for mobile devices, but it will not trip up a modern machine in the least - not so far as most people go in their first several years. There are bigger fish to fry.
//------------------------------------------------------------------------------------------------------------------------
PRO: Learning to pass in your own uniform variables is an easy enough optimization to consider once you’ve finally gotten your feet wet and are not feeling so overwhelmed.

//-=================================================================================================================

If the manufacturers will not give us a modern, up-to-date, fully backwards-compatible bug repository, then we will have to build it ourselves.

Once a format for a wiki like this is decided upon, these bits of accumulated ‘wisdom’ can be posted so that people browsing the free repositories are not constantly stepping in poop.

Just think back to when you had various successes and problems with all the different methods over the years. Give the good and the bad: how did using a feature make your life easier as a beginner? How did you overcome the pitfalls you ran into? Things like this, built into a wiki, will make OpenGL a force that will knock people’s socks off - but only if it includes everything OpenGL from beginning to end: 15 years of backwards compatibility that should become rock-solid stable and easy to learn.

If I see any of it, I’ll cut and paste it to a file that will eventually turn into a posting that can be attached to all the links of legacy open source code. Now that stuff won’t be broken anymore. People will have instructions on how to fix it all when they use it.

All those older pages should not be thrown away; they could be made productive and useful again with little effort on our part.
//-----------------------------------------------------------------------------------------------------------------------------------------------
Tip: For a lot of GPUs, even today, 1 and 1.0 are not the same thing in GLSL! Don't rely on the driver to fix that for you, and certainly don't expect the shader to always work elsewhere just because it happens to work on your computer.
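A minimal GLSL sketch of the pitfall (stricter compilers, GLSL ES in particular, will reject an int literal where a float is expected, while some desktop drivers silently accept it):

```glsl
uniform float u_alpha;

void main()
{
    // May fail to compile on some drivers: 1 is an int literal,
    // and int-to-float conversion is not guaranteed everywhere.
    // gl_FragColor = vec4(1, 1, 1, u_alpha);

    // Portable: use float literals for float-typed values.
    gl_FragColor = vec4(1.0, 1.0, 1.0, u_alpha);
}
```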

Capacity isn’t the issue here. The issue is that mobile devices don’t have any legacy code to run, so there’s no need for them to support the legacy API.

Herein lies the problem. While the legacy API as a whole may not disappear, you are increasingly going to be faced with an either-or choice. You can use the legacy API or you can use the new features, but not both.

Apple have already said that new features will not be added to the compatibility profile context, so if you want to use them you need to use a core profile context, where immediate mode and the fixed function pipeline don’t exist.

Another issue is that interactions between newer features and the legacy API are frequently resolved in ways which make use of both impractical. E.g. newer features may have state which can’t be pushed and popped, so frameworks which rely upon objects’ render methods restoring the state preclude the use of newer features. Newer features may not be usable inside display lists (e.g. instanced rendering is prohibited).

Every single last mobile device supports OpenGL 1.1. Almost every device in use has this general capability; people have even been using this style of code to program HomeBrew apps for the Wii for many years now. Not everybody is interested in making games that attempt to look like big-budget CG movies. This misinformation is exactly like the flaming arguments that have people fighting over whether DirectX is better than OpenGL, or whether AMD is better than nVidia. People fight over whether C# is better than C++.

Now this nonsense has turned into an OpenGL vs. OpenGL bicker fest. There is room for all of it and to teach people otherwise is wrong.

No, they don’t. Mobile devices use OpenGL ES which is a completely different API which just happens to be modelled on OpenGL. ES 1.0 and 1.1 are not comparable to OpenGL 1.1, no matter what you may wish to believe.

You made the same mistake earlier on when you said:

This is misinformation. This is FALSE. This is damaging misinformation that is as bad as the infamous Wolfire blog post because all it does is serve to perpetuate lies. You’re undermining your own argument because you’re showing yourself up as someone who’s prepared to use lies as a prop for that argument. If you want your position on this to be taken seriously you really need to stop doing that now.

Also:
[ul]
[li]No version of OpenGL ES supports glBegin/glEnd; vertex arrays must be used.
[/li][li]OpenGL ES doesn’t have a compatibility profile. 1.x has a fixed-function pipeline, 2.x has shaders, and never the twain shall meet (i.e. you can’t mix the two).
[/li][/ul]
WebGL is based upon OpenGL ES 2.x, i.e. it has no fixed-function pipeline. Additionally, it doesn’t support client-side arrays (anything which can use a buffer object in desktop or embedded OpenGL must use a buffer object in WebGL).
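A rough sketch of what that means in practice (this assumes an existing WebGLRenderingContext `gl` and an already-linked `program` with an `a_position` attribute; the names are illustrative):

```javascript
// WebGL has no client-side arrays: vertex data must go
// through a buffer object before it can be drawn.
var vertices = new Float32Array([
   0.0,  0.5,
  -0.5, -0.5,
   0.5, -0.5
]);

var vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);

var loc = gl.getAttribLocation(program, "a_position");
gl.enableVertexAttribArray(loc);
// The last argument is a byte offset into the bound buffer,
// not a client-side pointer as in legacy desktop GL.
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLES, 0, 3);
```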

This is the kind of person that you have proven yourself to be. You have just lied to everyone here by intentionally misquoting what I said.

You said

Then I responded

[QUOTE=mhagain;1252456]OpenGL is not going anywhere and it’s only getting better as everyone’s drivers become more robust and diverse.

Mac, iOS, Android, Linux, PS3, Windows, Blackberry, WebGL, etc… Just to name a few big hitters who are all firmly in the scene.[/QUOTE]

You said that OpenGL lost a “war” and I responded that OpenGL is not going anywhere.
I did not say anything about those devices supporting immediate mode.

Now I’m going to address your lying by quoting you

You clearly stated that

which is not even partially correct; it is outright wrong. Yet you made no effort to show that you weren't sure about it, such as prefacing it with something like "I think". I think everyone would take that as an intentional statement. An intentional statement which conveys false information is, by definition, a lie. Thanks for playing.

Do you have any transitive thinking capabilities? Ok, let’s work this out. You said OpenGL, which you consider to be used on the following platforms

isn’t going anywhere. Looking at the list, I see at least 3 mobile platforms in there - don’t know what subset of OpenGL the famous “etc.” uses but what the hell.

Now, since you mentioned at least 3 mobile platforms and you stated that

So you imply that immediate mode is supported on those devices due to the fact that it's an OpenGL 1.1 core feature. This directly contradicts you saying

No, it hasn’t. Everyone participating here is unsupportive of your claims. Normally no one should give a brownie about such idle ramblings, but I for one regularly get pissed off at people trying to head back to the early- to mid-90s. All you get is more code and higher complexity which, to quote Bjarne Stroustrup, simply lead to “more bugs”. If you really love legacy OpenGL, you have to let it go man.

The PS3 does NOT use OpenGL.
There is an OpenGL|ES wrapper but no one in their right mind touches it because it’s too slow.

I do wish people would stop repeating this incorrectly…

At the risk of… something I am going to add my 2 cents.

Here goes, on the deprecation stuff:
[ol]
[li]Removal of immediate mode is in general a good thing; the only losers are those getting started with OpenGL… the removal just makes getting started with OpenGL more of a pain now[/li][li]Removal of the fixed-function pipeline is borderline. The basic thinking is that an implementation of the fixed-function pipeline made by the vendor will likely be better than doing it via shaders, and for a large number of situations the fixed-function pipeline gets the job done. On the other hand, the interface for multi-texturing in the fixed-function pipeline is quite awful, so I am glad to see it gone; in addition, all the state associated with the fixed-function pipeline was a pain too.[/li][li]Removal of the QUAD primitive types was, IMO, a mistake. One can simulate them with a geometry shader, but that seems awfully silly. As a side note, OpenGL ES3 does NOT have geometry shaders.[/li][li]Removal of client-side arrays (i.e. non-buffer-object-backed index and vertex data) was, IMO, a mistake as well. The use case of vertex and index data changing from frame to frame got ickier: with client-side arrays, the GL implementation did the right thing for you, whereas now we play, as Dark Photon has called it, buffer object Ouija board when streaming the data. As a side note, OpenGL ES2 and ES3 DO allow client-side arrays.[/li][li]glLineWidth… this one was weird. It was marked as deprecated but never removed. I am grateful it was not removed, but well…[/li][li]Removal of display lists was not done correctly in my opinion. My reasoning is simple: with a display list, one could define and queue up rendering sequences easily, and in an ideal world the GL implementation did magicks to optimize them. That was great functionality. What did suck was how those commands (display lists) were defined; what would be great is a replacement for them.[/li][/ol]

In general I agree with kRogue’s comments here, but do differ on a few points.

Immediate mode served a purpose other than just as an easy entry-point for learning. It was great for rapid prototyping and proof-of-concept work. Even in a scenario where the more traditional immediate mode is removed, I would have liked to have seen glBegin/glArrayElement/glEnd retained (immediate-mode indexing - yayyy!)
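For readers who never ran across the idiom, a minimal sketch of immediate-mode indexing under a legacy (compatibility-profile) desktop GL context; the vertex data here is made up:

```c
/* Vertex data lives in a client-side array; glArrayElement issues
 * one indexed vertex at a time between glBegin/glEnd, pulling from
 * whatever arrays are currently enabled. */
GLfloat verts[] = {
     0.0f,  0.5f,
    -0.5f, -0.5f,
     0.5f, -0.5f
};

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);

glBegin(GL_TRIANGLES);
glArrayElement(0);
glArrayElement(1);
glArrayElement(2);
glEnd();
```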

The FFP has been emulated by the driver via shaders on almost all hardware for close on 10 years. Some elements of the FFP remained useful, however (I'm thinking primarily of fog here), and removing them just made the exponential shader explosion even worse. ARB assembly programs had the right idea.

Quads should have stayed.

Client-side arrays should have stayed (but via glVertexAttribPointer with the old glEnableClientState removed). I’m detecting a bit of “D3D envy” in the removal of these (and speaking of “D3D envy” it’s tragicomic that in 2013 OpenGL still doesn’t have a dynamic buffer object updating API as good as D3D’s - no more driver hints! - give us explicitly requested behaviour that you’re guaranteed to get instead, please - D3D has had this problem solved since 1999, for crying out loud, there’s no need for modern GL to be so over-cautious about it).

Some hardware doesn’t accelerate lines > 1 wide. Deprecating but not removing seems both a concession to that hardware and a cop-out for hardware that does.

Display lists were far too complex in the old API, with lots of weird edge cases and fiddly rules about what can and cannot be put into them (also refer to the document I posted on the previous page for some lovely examples of interaction between display lists and immediate mode). Agreed that a clean replacement for them would be nice.