Display lists in 3.1

It is simply a ridiculous proposition that we would require civil engineers, mechanical engineers, aerospace engineers, etc. to not use modern graphics cards in order to use their existing software.

Um, exactly how would this be requiring them to not use OpenGL? What they would be required to do is change their applications. You’re not losing any functionality; you’re just streamlining the API.

Now, since Kilgard’s position is NVIDIA’s position, and NVIDIA owns enough of the GL market to be able to kill any proposal they don’t like, deprecation is effectively dead as an API cleanup tool. No one will ever, ever write a purely core API. The most deprecation can be is a guide to the API paths that will actually be performance-friendly.

This is the second time OpenGL missed an opportunity to build a better API. They won’t get a third one. So you can expect fixed-function, display lists, immediate mode, and any number of other poorly thought out features to be supported in perpetuity on OpenGL.

Which is also why OpenGL will be trapped forever on desktops. Any embedded devices will be using OpenGL ES. Which is much like what GL 3.0 ought to have been.

If a customer had existing software that uses a “deprecated” OpenGL feature and then happened to upgrade his hardware, he would expect his old software to keep working on the new hardware. Of course, he could ask the application developer to upgrade the software. But the application developer could choose not to, for reasons such as economic considerations.

Nothing in OpenGL 3.2 prevents you from writing new software using only the pure core OpenGL features, if you choose to. Bear in mind that successful software needs to reach the maximum number of customers. For some software, it is not practical to require all its customers to use very new hardware (such as new game consoles) on which OpenGL 3.2 is available, unfortunately.

I think an analogy can be found in the C++ programming language. Some people prefer to use STL’s vector; others may find raw arrays easier to use. We cannot say that, because STL’s vector is better designed, we should deprecate the “new” keyword. There was once a time when some zealots thought Java should replace C++ because Java is a pure object-oriented language. But the beauty of C++ is that it does not impose a certain programming pattern on programmers. That is one of the reasons why C++ is successful. A person should not look only at his own narrow application area and then try to impose his narrow views on others.

If a customer had existing software that uses a “deprecated” OpenGL feature and then happened to upgrade his hardware, he would expect his old software to keep working on the new hardware.

Since when? There has never been any guarantee that any old software will run on new hardware with no modifications. There are plenty of examples of old code that simply doesn’t work on new hardware.

Furthermore, that’s not what was ever being discussed. Even in the full API rewrite land of Longs Peak, the old GL 2.1 would still be there as a legacy API. You simply couldn’t intermingle them, so if you wanted to use LP features, you had to use LP in full. If you created a Longs Peak context, it was a Longs Peak context and exposed LP functions. If you created a 2.1 context, it was a 2.1 context and exposed 2.1 functions.

I think an analogy can be found in the C++ programming language. Some people prefer to use STL’s vector; others may find raw arrays easier to use. We cannot say that, because STL’s vector is better designed, we should deprecate the “new” keyword.

This is a pretty terrible analogy, as the “new” keyword is used for things other than raw arrays.

But the beauty of C++ is that it does not impose a certain programming pattern on programmers.

Of course it does. Are functions first-class objects? No. Ergo: minimal functional programming at best. The mere fact that it is statically typed means that it imposes a programming pattern on the programmer.

Of course not. The entire idea of the deprecation mechanism was based on flawed thinking. You can’t have full backwards compatibility and a streamlined modern API using the same interface. The only way to solve this would have been a fresh start with a modern API that got rid of all the inconvenient ballast OpenGL is saddled with.

Just removing the obsolete features and declaring the rest ‘core’ is not going to do it. My major issue with the core is that even though it got rid of a lot of useless stuff (along with some I’m sorry to see gone) it didn’t do anything to make the remaining features easier to use. It’s still the same old and atrocious API that has been bothering me for 10 years now. So for most programmers there’s just no motivation to switch. 2.1 + extensions is mostly as good as 3.2 but has the big advantage that it’s much easier to target both modern and old hardware with the same code base.

I have to disagree here. Put a lightweight wrapper on top, and GL 3.2 can look like DX10 if you want: vertex-attrib declaration strings that your GL 3.2 path binds to generic attribs and your 1.5-2.1 path binds to fixed-function attribs; uniforms that you compute with your favourite math lib and upload in whatever fashion is optimal for 3.2 vs 2.1 vs 2.0 vs 1.5; VBOs that are available on all cards; etc. It’s certainly as rosy as the DX8/9/10 transitions.
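A minimal sketch of what I mean, assuming a hypothetical wrapper function named BindPositionArray and a path flag decided at context-creation time (not any particular library’s API):

    // Hypothetical wrapper: one call site, two back-ends chosen at init time.
    // Assumes a GL function loader (e.g. GLEW) has already been initialised.
    enum class Path { GL32, Legacy };
    static Path gPath = Path::GL32;   // decided once, from the created context

    // Feed a position stream from the currently bound VBO.
    // On the 3.2 path it goes to generic attribute 0 (matched by name in the
    // shader); on the 1.5-2.1 path it feeds the fixed-function vertex array.
    void BindPositionArray(GLsizei stride, const void* offset)
    {
        if (gPath == Path::GL32) {
            glEnableVertexAttribArray(0);
            glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, offset);
        } else {
            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, stride, offset);
        }
    }

The uniform and texture plumbing follows the same pattern: one declaration in the app, one upload routine per path.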
Like Alfonse wrote recently elsewhere:

To me the fundamental flaw in OpenGL’s design is that everything has to be bound to the system before being used. As a result it’s not possible to write something that can reliably manipulate objects.

Therefore any abstraction layer placed on top of it will suffer some problems, and especially if you have to work with third-party code you have no control over, you’ll be in trouble.

Something like ‘why the hell does this not use the texture I want it to use?’ And then, after scratching your head month after month over why your code is not working, you realize that you depend on code that doesn’t play nice. Well, I was there and it’s not nice.

So bottom line, you can abstract the API all you want but what you can’t abstract without getting inefficient is its completely outdated design.

… and now take one guess what the average programmer working under tight time constraints will do. Right! He’ll choose an approach where he does not need to duplicate code for everything, so he will most likely implement GL 2.1 only, with some extension checks for modern cards. Why should he go the GL 3.2 route where he needs to do everything with shaders? It’s just more work. All the nice convenient fixed-function stuff is still there, and a significant portion of any application does not need shaders. Why waste work on them if he can code one path that fits all hardware without suffering any performance loss? From an economic standpoint, going 3.2 would be a waste of valuable time. The stuff that’s really useful is all available as 2.1 extensions - with the added advantage that it can be combined with what was deprecated.

Therefore any abstraction layer placed on top of it will suffer some problems, and especially if you have to work with third-party code you have no control over, you’ll be in trouble.

So, what you’re talking about is what happens when you call code you don’t control that uses OpenGL. That code should be part of your abstraction. Anything that does rendering should be part of the abstraction.

Why should he go the GL 3.2 route where he needs to do everything with shaders?

Um, have you even been reading the thread? Deprecation is dead! If you want to use display lists or fixed function or whatever else, it is still there!

Name an implementation of GL 3.2 that does not include the compatibility profile. Just one of them.

Very funny! Tell me, how am I supposed to (efficiently) abstract code that I only have in binary form and where I can’t even tell what exactly it’s doing? All I know is that it doesn’t play by the rules, so I had to put it in a thick and very clumsy wrapper to use it.


Um, have you even been reading the thread? Deprecation is dead! If you want to use display lists or fixed function or whatever else, it is still there!

I know. But I was talking about Core. Since deprecation is dead, what motivation is there not to use the old features (a.k.a. GL 2.1 plus extensions)?

Name an implementation of GL 3.2 that does not include the compatibility profile. Just one of them.

Let me guess: None! Nobody can afford to drop the old stuff. Which makes the core profiles an exercise in pointlessness. The entire thing was so ill conceived that it was doomed to fail from the first time it was mentioned.

Very funny! Tell me, how am I supposed to (efficiently) abstract code that I only have in binary form and where I can’t even tell what exactly it’s doing? All I know is that it doesn’t play by the rules, so I had to put it in a thick and very clumsy wrapper to use it.

For “telling what exactly it’s doing”, GLIntercept is a reasonable solution. That’s orthogonal to abstracting.

As for how to abstract it, it’s simple: no code outside of your abstraction layer may call this code. See? Your abstraction would have a function like “DoThatThingTheBinaryBlobDoes()”, and the details would be handled in the implementation of that function. It would set the appropriate GL state, call the actual binary library function, and restore the GL state as needed for the rest of the abstraction to work.
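A rough sketch of that wrapper, assuming a compatibility context (glPushAttrib is not available in core) and a hypothetical blob entry point named BlobRender:

    extern "C" void BlobRender();   // the binary library call we cannot inspect

    void DoThatThingTheBinaryBlobDoes()
    {
        // Save the state the blob is known (or suspected) to trample.
        glPushAttrib(GL_TEXTURE_BIT | GL_ENABLE_BIT | GL_CURRENT_BIT);

        BlobRender();

        // Restore it so the rest of the abstraction keeps its assumptions.
        glPopAttrib();

        // Anything the attrib stack does not cover (the current program,
        // buffer bindings, ...) has to be re-set explicitly here.
        glUseProgram(0);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

How much you save and restore depends entirely on what the blob is suspected of touching.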

Since deprecation is dead, what motivation is there not to use the old features (a.k.a. GL 2.1 plus extensions)?

2.1 + extensions is not the same as 3.2 compatibility. Indeed, I imagine if you’re on a 3.2 compatibility capable implementation, you can’t get 2.1 at all unless you specifically ask for it. The implementation is free to give you 3.2 compatibility.
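For reference, this is roughly how you specifically ask for one profile or the other on Windows via WGL_ARB_create_context / WGL_ARB_create_context_profile (tokens come from wglext.h; wglCreateContextAttribsARB has to be loaded through an existing context first):

    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
        0
    };
    HGLRC rc = wglCreateContextAttribsARB(hdc, /*share*/ NULL, attribs);
    // A plain wglCreateContext(hdc) leaves the version and profile up to the
    // implementation, which is free to hand back 3.2 compatibility instead of 2.1.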

The entire thing was so ill conceived that it was doomed to fail from the first time it was mentioned.

It was only doomed because NVIDIA doomed it. They decided to support compatibility profiles in perpetuity, and that’s the end of it.

If NVIDIA and ATI had said, “We’ll support GL 2.1, but all new stuff will be limited to 3.x core,” it would have worked. ARB_compatibility and the compatibility profile are what killed it.

That’s what I’m doing but that’s precisely what shouldn’t be needed as it makes the code inefficient (meaning in this particular case that I have to restore the entire texture state for up to 8 texture units each time I call such a function.) You can abstract shitty design all you want but there are always points where the [censored] bleeds through to bite you in the ass. And you can twist it all the way you want but I call a system that maintains one global state for the entire application badly designed. I admit that in ancient times it may have seemed convenient not to carry around all those pointers but in the end it was still a bad idea.

It was only doomed because NVIDIA doomed it. They decided to support compatibility profiles in perpetuity, and that’s the end of it.

Well, you can see this from different viewpoints. It was a futile endeavour to try to bring the current OpenGL API up to date without changing the fundamental basics of how it works.

I can’t say who is to blame that instead of a real upgrade to something modern we got these half-assed changes.

Instead of a new API that’s designed to work with modern feature sets, all we would have been left with is the same old and outdated system, just with fewer features. The bad decisions that were made when the modern features were first implemented were not addressed by any of this (like the stupid hint system to tell how a VBO is to be used, for example).

So essentially GL 3.0 core was just 2.1, plus making a few common extensions core, minus lots and lots of convenience. This may be something to get a few geeks excited, but to any real-world programmer such a system is not attractive, as he would most likely stick to 2.1 plus the already existing extensions.

The thing is, if you need a fresh start, do a fresh start - even if it means changing fundamental design paradigms. Yes this would have resulted in an incompatible API but hey, does it really make a difference? 3.x core omits so much of the old functionality that it’s mostly impossible anyway to port code straight over.

So instead of truly dumping the baggage, including the design flaws that are inherent in the API, they just decided to mark some functionality obsolete but did nothing about the other issues (like the global application state, for example). So instead of a clean and modern API, all we got is a stripped-down version of what we already have, and that hardly serves as motivation to migrate. You gain nothing from doing so.

Well, I guess that’s what you get if decisions have to be made by committee. Since you can’t satisfy everyone, the best course of action is to do nothing.

So to boil it down from my point of view, GL 3.x core contains all the mess OpenGL implies but none of the convenience that previously made up for it. No, thank you, I’d say.

If NVIDIA and ATI had said, “We’ll support GL 2.1, but all new stuff will be limited to 3.x core,” it would have worked. ARB_compatibility and the compatibility profile are what killed it.

Possibly. I doubt it. We still would have been saddled with an API that was only brought halfway into the future, not to mention that much of the new stuff would have been added to 2.1 as extensions, which would have resulted in the same situation we are in now. So in my opinion it was inevitable that the current situation happened. I knew from the moment the 3.0 specs were presented that it wouldn’t work out.

That’s what I’m doing but that’s precisely what shouldn’t be needed as it makes the code inefficient

First, glBindTexture is not necessarily inefficient. Especially when you’re binding texture 0 (aka: unbinding textures). Binding things does not imply the desire to render with them.

Second, yes, OpenGL requires that you be responsible for the use of the API. That means that you need to be responsible for all use of the API, even usage that you have decided to cede responsibility over to a third party library. You made the choice to use a binary blob library with no source code access, one that makes no guarantee as to what state it is and is not changing. And therefore, you must take responsibility for your choices.

meaning in this particular case that I have to restore the entire texture state for up to 8 texture units each time I call such a function.

You have piqued my curiosity. What would make you think you need to do that?

It’s been a long time since I did any texture environment fixed-function work, but in the land of shaders, it just doesn’t matter. You bind a program, and you bind the textures that this program uses. If there are some other texture units with textures bound to them, nobody cares; it won’t affect the rendering. And if those texture units did have something bound to them, it likewise does not matter, as you will be binding the needed textures for this program.

The only time I could imagine needing to clean out texture unit state would be if you bound a program, did some rendering with it, then called some function that does arbitrary unknown stuff to the texture state, and then wanted to keep rendering as if the unknown stuff had not happened. And even then, you only need to clean out the texture state that the program was actually using.
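As a concrete sketch (prog, diffuseTex, normalTex, the uniform names, and indexCount are all hypothetical): bind the program and only the units its samplers read; whatever is left bound on the other units simply does not participate in the draw.

    glUseProgram(prog);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glUniform1i(glGetUniformLocation(prog, "u_diffuse"), 0);

    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, normalTex);
    glUniform1i(glGetUniformLocation(prog, "u_normal"), 1);

    // Units 2..7 may still hold textures from earlier draws; since this
    // program never samples them, they have no effect on the result.
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);  // indices from the bound GL_ELEMENT_ARRAY_BUFFER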

You can abstract shitty design all you want but there are always points where the [censored] bleeds through to bite you in the ass.

Absolutely not. If you have an abstraction that allows the underlying implementation to bleed through, this is the textbook definition of a bad abstraction. The whole point of an abstraction is to have the freedom to change the implementation without affecting the interface.

BTW, I think you missed a [censor] point.

like the stupid hint system to tell how a VBO is to be used, for example.

Again, my curiosity is piqued. Exactly how would you have specified usage for buffer objects? Bear in mind that concepts like “AGP” don’t last forever; even video memory itself may fall by the wayside as a location for permanent storage. Also bear in mind that buffer objects are not limited to mere vertex data.

I’m not entirely happy with the usage hints. I think they could have been a bit clearer as to when to use DYNAMIC. But overall, I think they were a pretty legitimate part of the buffer object API.
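For anyone following along, the hints in question are the last argument to glBufferData. My rough rule of thumb (not gospel) looks like this; the buffer sizes and pointers are placeholders:

    // Mesh data written once at load time, drawn many times.
    glBufferData(GL_ARRAY_BUFFER, meshBytes, meshData, GL_STATIC_DRAW);

    // Data respecified about once per frame and drawn a few times,
    // e.g. streamed particle or UI vertices.
    glBufferData(GL_ARRAY_BUFFER, frameBytes, NULL, GL_STREAM_DRAW);

    // Data modified repeatedly but drawn many times between modifications.
    glBufferData(GL_ARRAY_BUFFER, dynBytes, NULL, GL_DYNAMIC_DRAW);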

I can’t say who is to blame that instead of a real upgrade to something modern we got these half-assed changes.

The ARB. They were working on it, and they failed to get it done. So instead, they tried deprecation rather than a single big change. NVIDIA torpedoed them on deprecation, so they’re stuck with the old functionality.

like the global application state for example

You keep talking about this “global application state” as though it is some abstract concept. It isn’t. It’s called the GPU.

You only have one context because you only have one GPU*. You are rendering to a single thing. And that single thing has certain state. And changing that certain state has a cost. By exposing the context, you are able to determine how much state you are changing and find ways to change less state. A purely object-based API with no context, where you call a render function with a VAO, Program, Texture Set, and FBO render target, would make this much harder on the implementation.

The actual problem with the context isn’t that it exists. It is that the context is used for more than rendering. When you bind a texture, it could be because you want to render with it on that texture unit. Or maybe you just want to upload some data to it. The implementation doesn’t know. So there has to be a lot of back-end work that figures out when you are just poking around with object state, and when you really want to use the object.

* I’m aware that there are a lot of multi-GPU systems out there. But the drivers do their best to pretend that these are a single GPU.
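To make the bind ambiguity concrete (tex, pixels, w, h, and vertexCount are hypothetical): at the glBindTexture call the driver cannot tell these two intents apart; only the call that follows reveals whether the bind was for editing or for rendering.

    // Case 1: bind in order to edit -- the texture is never drawn here.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    // Case 2: the very same bind call, but this time the intent is to render.
    glBindTexture(GL_TEXTURE_2D, tex);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);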

So to boil it down from my point of view, GL 3.x core contains all the mess OpenGL implies but none of the convenience that previously made up for it. No, thank you, I’d say.

There is one good thing that comes out of this: it acts as a strong demarcation point. As new features are added to the API, they will be increasingly incompatible with legacy functionality.

It’s not much, admittedly. But it’s something.

not to mention that much of the new stuff would have been added to 2.1 as extensions

Did you miss the part where I said, “all new stuff will be limited to 3.x core?” That includes extensions. Implementations decide what to expose on what hardware.

When you bind a texture, it could be because you want to render with it on that texture unit. Or maybe you just want to upload some data to it. The implementation doesn’t know. So there has to be a lot of back-end work that figures out when you are just poking around with object state, and when you really want to use the object.

I thought that most implementations postponed any decision-making of this sort until a draw/read/write operation is executed? All you’re doing with things like glBindTexture is setting a bit in a bitfield. The real work (i.e. setting the states GPU-side) is done when you draw/read/write some kind of resource - and at that point it knows what you want to do (and can even defer setting state irrelevant to that operation, such as the blend op if all you’re doing is updating a texture’s contents).

I thought that most implementations postponed any decision-making of this sort until a draw/read/write operation is executed? All you’re doing with things like glBindTexture is setting a bit in a bitfield.

Exactly. Imagine what OpenGL could do if it could tell the difference between bind-to-edit and bind-to-render. It could give you errors at bind time. For example, binding an incomplete texture is currently perfectly legal. If OpenGL implementations could tell the difference, they could give a GL error immediately, not some time later, thousands of lines of code away from the source of the actual problem.

Imagine FBO creation and management in such a system. An implementation could give an error when an improperly created FBO is bound to the context, rather than when you render with it.

It’s much harder to tell what state happens to be incorrect when you draw than when you first set that state.
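Today you have to ask for that information explicitly; if you forget, the first sign of trouble is a GL_INVALID_FRAMEBUFFER_OPERATION raised by some draw call far away from the mistake. A sketch, with fbo and tex as hypothetical names:

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);   // suppose tex is unsuitable

    // Without this explicit check nothing complains yet; with it, the
    // problem is reported at the point where it was introduced.
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // log and bail out here, not thousands of lines later
    }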

No, I want my pipeline to be parallel! I’d rather use a sync object to check the error state… and maybe wait for the sync object if I care about the result straight away (to aid debugging, I suppose). I haven’t read the whole of this thread, but you seem to be in favour of synchronising the implementation with the application thread.

I haven’t read the whole of this thread, but you seem to be in favour of synchronising the implementation with the application thread.

I don’t see what that has to do with anything being discussed.

When you call glBindTexture, the implementation must immediately fetch the object associated with that texture name. This is so that it can modify that object, or get pointers to video memory for that texture’s data (if you render with it).

When you call glDraw*, the implementation must immediately get the state for that object and copy it off somewhere for rendering. It must mark that the texture in question is in use, so that attempts to destroy or change the object can be delayed.

So don’t think that “nothing” happens when you bind a texture and change its state. A lot has to happen. But none of it causes a synchronization of the rendering pipeline. And nothing I’m proposing would cause such a synchronization either. It simply more correctly describes what it is you want to do. Binding means I want to draw with the texture, and it means only that. Whereas now there is some ambiguity.

That certainly doesn’t tally with what I’ve read from NVIDIA on the subject of bindless graphics. I understood that things like that immediately get posted to another thread, and subsequent lookups of object state and validation happen there. Calling glGetError stalls the application thread waiting for the driver thread to return the error state. It doesn’t flush the pipeline, but it makes your app sync with the driver thread. But you probably know more than I do. I’m always getting the wrong end of the stick.

That certainly doesn’t tally with what I’ve read from NVIDIA on the subject of bindless graphics.

What I described is exactly why bindless was invented: so that you don’t have to bind buffer objects in order to render. Buffer object binding (or rather, attaching a buffer object to the VAO state) is expensive because it must access various buffer object state data, rather than just moving a pointer around.

I should shut up and read the thread!

I re-read this topic with great interest just recently. It moved way beyond the reason why I opened it. But I would like to take the opportunity to go back to my main problem with ditching display lists.

Someone wrote that one does not need to use any wgl functions in Windows to display text, that textures could be used instead, and that hardly anyone uses extruded 3D text anyway. The problem is that I do, extensively. If I can’t use wglUseFontOutlines with a >=3.0 context, then it is a bigger problem for me than just making a decision about display lists (even if NVIDIA still supports them).
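For context, the pattern I depend on is roughly the classic one below (font selection, error checking, and headers such as <windows.h>, <GL/gl.h>, <cstring> omitted; the deviation and extrusion values are arbitrary):

    // Build one display list per ASCII glyph, as extruded 3D polygons,
    // from the font currently selected into the device context.
    GLYPHMETRICSFLOAT gmf[256];
    GLuint base = glGenLists(256);
    wglUseFontOutlines(hdc, 0, 256, base,
                       0.0f,              // deviation (chordal tolerance)
                       0.1f,              // extrusion depth along -Z
                       WGL_FONT_POLYGONS, // filled glyphs, not line outlines
                       gmf);

    // Drawing a string is then a single glCallLists.
    const char* text = "Hello";
    glListBase(base);
    glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, text);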

I know that there are several libraries available as alternatives to wglUseFontOutlines, but they seem to be made for OSs other than Windows (Linux mainly), and they require other libraries that work only if other libraries are present, etc. It is not even clear to me what form they convert font glyphs into. At least wglUseFontOutlines is very simple. One display list for each character in a font is perfect, unless you need to write text in a scripting language.

Does anyone have similar needs? 3D text and the urge to move beyond OpenGL 2.1?

How about GetGlyphOutline()? Here’s an example of custom rasterization of the raw curve data it provides:
http://www.antigrain.com/tips/win_glyph/win_glyph.agdoc.html