That’s what I’m doing but that’s precisely what shouldn’t be needed as it makes the code inefficient
First, glBindTexture is not necessarily inefficient. Especially when you’re binding texture 0 (aka: unbinding textures). Binding things does not imply the desire to render with them.
Second, yes, OpenGL requires that you be responsible for the use of the API. That means that you need to be responsible for all use of the API, even usage that you have decided to cede responsibility over to a third party library. You made the choice to use a binary blob library with no source code access, one that makes no guarantee as to what state it is and is not changing. And therefore, you must take responsibility for your choices.
meaning in this particular case that I have to restore the entire texture state for up to 8 texture units each time I call such a function.
You have piqued my curiosity. What would make you think you need to do that?
It’s been a long time since I did any texture environment fixed-function work, but in the land of shaders, it just doesn’t matter. You bind a program, and you bind the textures that this program uses. If there are some other texture units with textures bound to them, nobody cares; it won’t affect the rendering. And if those texture units did have something bound to them, it likewise does not matter, as you will be binding the needed textures for this program.
The only time I could imagine needing to clean out texture unit state would be if you bound a program, did some rendering with it, then called some function that does arbitrary unknown stuff to the texture state, and then wanted to keep rendering as if the unknown stuff had not happened. And even then, you only need to clean out the texture state that the program was actually using.
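To make that concrete, here is a minimal sketch of that last case, assuming a live GL 2.0+ context; `blackBoxDraw`, `myProgram`, `diffuseTex`, and `normalTex` are hypothetical names standing in for the third-party call and your own objects:

```c
blackBoxDraw();                      /* may clobber any texture unit  */

/* Restore only the state this program actually depends on. */
glUseProgram(myProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, diffuseTex);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, normalTex);
/* Units 2..7 are left alone: this program never samples them,
 * so whatever the library bound there cannot affect our draw. */
```

Since you know which units your own program samples from, you never need a generic save/restore of all eight units.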
You can abstract shitty design all you want but there’s always points where the [censored] bleeds through to bite you in the ass.
Absolutely not. If you have an abstraction that allows the underlying implementation to bleed through, this is the textbook definition of a bad abstraction. The whole point of an abstraction is to have the freedom to change the implementation without affecting the interface.
BTW, I think you missed a [censored] point.
like the stupid hint system to tell how a VBO is to be used, for example.
Again, my curiosity is piqued. Exactly how would you have specified usage for buffer objects? Bear in mind that concepts like “AGP” don’t last forever; even video memory itself may fall by the wayside as a location for permanent storage. Also bear in mind that buffer objects are not limited to mere vertex data.
I’m not entirely happy with the usage hints. I think they could have been a bit clearer as to when to use DYNAMIC. But overall, I think they were a pretty legitimate part of the buffer object API.
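For reference, the hint in question is just the last argument to `glBufferData`; a sketch, assuming a GL 1.5+ context and hypothetical `staticVbo`/`streamVbo`/`meshVerts`/`particleBytes` names:

```c
/* Static mesh data: specified once, drawn many times. */
glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(meshVerts), meshVerts, GL_STATIC_DRAW);

/* Per-frame particle data: respecified every frame. */
glBindBuffer(GL_ARRAY_BUFFER, streamVbo);
glBufferData(GL_ARRAY_BUFFER, particleBytes, NULL, GL_STREAM_DRAW);

/* GL_DYNAMIC_DRAW sits between the two: modified repeatedly,
 * but used for more than one draw per modification. */
```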
I can’t say who is to blame that instead of a real upgrade to something modern we got these half-assed changes.
The ARB. They were working on it, and they failed to get it done. So instead, they tried deprecation rather than a single big change. NVIDIA torpedoed them on deprecation, so they’re stuck with the old functionality.
like the global application state for example
You keep talking about this “global application state” as though it is some abstract concept. It isn’t. It’s called the GPU.
You only have one context because you only have one GPU*. You are rendering to a single thing. And that single thing has certain state. And changing that certain state has a cost. By exposing the context, you are able to determine how much state you are changing and find ways to change less state. A purely object-based API with no context, where you call a render function with a VAO, Program, Texture Set, and FBO render target, would make this much harder on the implementation.
The actual problem with the context isn’t that it exists. It is that the context is used for more than rendering. When you bind a texture, it could be because you want to render with it on that texture unit. Or maybe you just want to upload some data to it. The implementation doesn’t know. So there has to be a lot of back-end work that figures out when you are just poking around with object state, and when you really want to use the object.
* I’m aware that there are a lot of multi-GPU systems out there. But the drivers do their best to pretend that these are a single GPU.
So to boil it down from my point of view, GL 3.x core contains all the mess OpenGL implies but none of the convenience that previously made up for it. No, thank you, I’d say.
There is one good thing that comes out of this: it acts as a strong demarcation point. As new features are added to the API, they will be increasingly incompatible with legacy functionality.
It’s not much, admittedly. But it’s something.
not to mention that much of the new stuff would have been added to 2.1 as extensions
Did you miss the part where I said, “all new stuff will be limited to 3.x core?” That includes extensions. Implementations decide what to expose on what hardware.