API evolution - idea exchange

Where did I say that shader objects have state? I may have to repeat it once more, but what I said is that uniforms are used and declared in shaders; I did not say that they are located in the shader objects.

OK, now I will try to read these ugly specification txt files more deeply.

Perhaps we should reprioritize API features into 3 categories: Critical, Substantial, and Minor.

A critical API feature is something where you need new API to do something that is either impossible or exceedingly inconvenient as the specification currently stands. From my list, that would include split vertex/fragment programs and binary blobs. Neither of these is fundamentally possible without explicit API support.
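To make the binary-blob request concrete, here is a rough sketch in C of the sort of entry points it would take. To be clear, the entry-point names and the GL_PROGRAM_BINARY_LENGTH token are my own assumptions for illustration; nothing like this exists in the current spec:

[code]
/* Sketch of a hypothetical binary-blob path.  Assume an extension
 * loader has resolved these entry points. */
#include <stdlib.h>

/* Save a linked program's binary so it can be reloaded next run. */
static void *save_program_binary(GLuint program, GLsizei *out_len,
                                 GLenum *out_format)
{
    GLint len = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);

    void *blob = malloc(len);
    glGetProgramBinary(program, len, out_len, out_format, blob);
    return blob;  /* caller writes this to disk */
}

/* Restore: skips compiling and linking GLSL source entirely. */
static GLuint load_program_binary(const void *blob, GLsizei len,
                                  GLenum format)
{
    GLuint program = glCreateProgram();
    glProgramBinary(program, format, blob, len);

    GLint ok = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    /* Binaries would be driver-specific; fall back to source on failure. */
    return ok ? program : 0;
}
[/code]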

A substantial API feature is something that you can currently do, but is rather onerous for the user, and the workaround inhibits the user’s ability to fully use the hardware. From my list, the two issues surrounding uniforms both qualify. While constantly updating uniforms may not be a significant performance bottleneck, there is no doubt that the implementation could do a lot better if it didn’t have to. And the implementation would be much better at optimizing shared uniforms than anything layered on top of it.
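To sketch what a shared-uniform mechanism might look like: store the shared values once in a buffer object and let every program reference it. The block-binding entry points and the "SharedMatrices" block name below are assumptions for illustration, not current API:

[code]
/* Hypothetical buffer-backed shared uniforms.  One buffer holds the
 * shared values; each program's block is wired to the same slot. */
GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 32, NULL, GL_DYNAMIC_DRAW);

/* Attach the buffer to binding slot 0 once... */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);

/* ...and point each program's "SharedMatrices" block at that slot. */
GLuint idx = glGetUniformBlockIndex(programA, "SharedMatrices");
glUniformBlockBinding(programA, idx, 0);
idx = glGetUniformBlockIndex(programB, "SharedMatrices");
glUniformBlockBinding(programB, idx, 0);

/* Updating the shared values now touches one buffer, not N programs,
 * and the implementation can optimize the storage directly. */
[/code]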

A minor API feature is something that is purely user-niceness. That is, there’s something unpleasant, unfortunate, or arcane in the API, and you need a modified API to correct it. Direct state access, and my other two suggestions, fall into this camp.

The thing I most want to stress is that, despite the implicit prioritization between the 3 levels, these are all important and should be addressed. Yes, the critical API features represent things we cannot currently do, so it is vital to get them added ASAP.

And here’s the other thing, though I doubt the ARB is going to take it much under advisement (especially considering who’s on the ARB): pairing hardware features (DX11 features) with improved API is wrong. If an API feature could be used with lesser hardware, it is wrong to force us to require greater hardware just to use it.

Korval - with regards to your last point, this is precisely why we made an effort to factor out pieces of functionality into ARB extensions which can be delivered on pre-GL 3.0 hardware. For example, MapBufferRange.
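For anyone who hasn’t looked at it yet, a minimal usage sketch (the offset, size, and flag choices below are just illustrative):

[code]
/* Update a sub-range of a buffer without stalling on the rest of it.
 * glMapBufferRange is from ARB_map_buffer_range / GL 3.0. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void *ptr = glMapBufferRange(GL_ARRAY_BUFFER,
                             1024,   /* offset into the buffer */
                             256,    /* length of the mapped range */
                             GL_MAP_WRITE_BIT |
                             GL_MAP_INVALIDATE_RANGE_BIT);
if (ptr) {
    /* write 256 bytes of new data into ptr ... */
    glUnmapBuffer(GL_ARRAY_BUFFER);
}
[/code]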

[quote]this is precisely why we made an effort to factor out pieces of functionality into ARB extensions which can be delivered on pre-GL 3.0 hardware. For example, MapBufferRange.[/QUOTE]

The problem is that the deprecation system that makes it possible to have APIs replace other APIs is not compatible with older hardware. And without that deprecation mechanism, a driver that supports GL 2.1-class hardware must deal with all of the problems of the GL 2.1 API.

I just want to make sure that this does not happen again. My biggest concern is this situation.

3.1 comes around. It has some API improvements. Let’s say that direct state access is promoted to core, with an appropriate extension to 3.0. And as one would expect, all non-direct access methods are made deprecated in 3.1. That is all to the good, until 3.2 rolls around.

Now, let’s say that 3.2 comes out with the removal of deprecated features from 3.1. This is all in tune with the general, slow cleanup of the API. But 3.2 also needs to support DX11 or even higher levels of features.

Now, the removal of all non-DSA state access changes how drivers are written. They can now operate on the basic assumption that if you bind an object to the context, you intend to render with it. This is a perfectly valid and reasonable assumption, and it allows drivers to optimize better. But this only works once the non-DSA functions are removed, not just deprecated.
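To illustrate the ambiguity drivers face today, compare the two styles (the DSA spelling below follows NVIDIA’s EXT_direct_state_access; take it as one possible shape):

[code]
/* Bind-to-edit: the driver can't tell whether this bind precedes a
 * draw call or is just a prelude to editing a parameter. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* DSA style (EXT_direct_state_access spelling): the edit names the
 * object directly, so once the non-DSA entry points are gone, a bind
 * can only ever mean "I intend to render with this". */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                       GL_LINEAR);
[/code]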

However, 3.2 also got bound up with new DX11-class features. And since you can’t provide an extension that says “deprecated features are removed,” you have now tied an optimization to hardware. Even with the “cored” version of DSA as an extension, the driver implementing that extension must still allow non-DSA access. Which means that the previously mentioned optimizations cannot happen unless you’re using a real 3.2 context.

The ARB needs to find some way to deprecate APIs through extensions or something. Or all hardware features need to come through a path other than core promotion. We have a way to add hardware features and API features as extensions. But the benefits of new APIs usually only matter once the driver knows that the old APIs are actually unavailable.

I strongly agree with all of Korval’s points here.
Features like integer support do not directly impact the core API, and many people won’t even use them, so hardware support for them should not have been a requirement of the core API.
If the OpenGL API does change in the future and a software developer wants to use it for its improved performance and stability, then they will write an OpenGL 3.1 renderer.
But if they also need to support the previous generation of hardware, then they are forced to write a second “legacy” renderer, even though both may be using identical features.

I can understand that major changes such as shaders in 2.0 will change the entire API, and in that case the hardware should be linked to the API version.
But where we are simply adding an extra feature (e.g. a tessellator) which some people will use and some won’t, it should NOT be in the core API.

I would like to suggest “Required Extensions” as a possible solution.
This would be an extension that a vendor is required to support in a driver, for any hardware capable of supporting it.

Say we have a new extension, GL_ARB_tesselator. If the 3.1 specification named this as a “required extension”, then a driver could only claim to be an OpenGL 3.1 driver if it advertised this extension on all hardware that can support it.

For older hardware, the 3.1 API will still be available, but without the tessellator functionality.
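In code, an application would probe for a required extension exactly the way it probes today; only the guarantee behind the string changes. A sketch using my hypothetical GL_ARB_tesselator name and the GL 3.0 indexed query:

[code]
/* Runtime check for the hypothetical GL_ARB_tesselator "required
 * extension", using the GL 3.0 indexed extension query. */
#include <string.h>

static int has_extension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS,
                                                     (GLuint)i);
        if (ext && strcmp(ext, name) == 0)
            return 1;
    }
    return 0;
}

/* On a conforming 3.1 driver this could only return 0 if the hardware
 * truly can't support it; the rest of the 3.1 API is still available. */
int have_tess = has_extension("GL_ARB_tesselator");
[/code]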

But this is the wrong way around: API improvements should be in the core, and hardware features should be extensions.
If hardware can’t support a feature you can’t add it later, whereas a change to the way the API works can be implemented across several generations of hardware.

Linking specific hardware to an API version is the way Direct3D does things, and it is one of the most annoying things about their API.
If you are writing a game that’s only intended for the latest hardware then it’s not that bad, but those writing CAD or visualisation systems need to support several generations of hardware and would rather not have to write a separate renderer for each one.

There’s a tension here - if all new hardware features are extensions, then you get an extension forest. A strength of GL3.0 is that the forest has been cut way down (not to zero, but close). Raising the hardware floor was the cost of that move.

It should be pointed out that just because GL 3.0 raised the hardware floor, there is no guarantee or specific need that would force 3.1 or 3.2 to do the same thing. It might be time to do it again if (as with GL2) the weight of functionality expressed through extensions got too large relative to that within the core.

(ISV hat) I want all the IHVs to implement the same extension set. It means fewer code paths for me to worry about. The missing link is that there are presently only two places in OpenGL where functionality can go: “core, required” or “extension, optional”.

If post-OpenGL 3.0 fifth-generation programmable GPU hardware has (say) five new features, it might be nice from the ISV POV if all five could be put into a single uber-extension, and the IHVs either support it or they don’t, rather than picking one of the 32 possible permutations of support for the five individual features. One giant caps bit for that new generation.

That would offer a middle ground between the two extremes above, and the core API would not need to raise the hardware floor.
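A rough sketch of what the one-giant-caps-bit check could look like for the application, reusing the kind of extension-string probe sketched earlier in the thread; GL_ARB_gpu_generation5 and the two path functions are made-up names:

[code]
/* One check instead of 32 permutations: a single hypothetical
 * uber-extension implies all five gen-5 features at once. */
if (has_extension("GL_ARB_gpu_generation5")) {
    use_generation5_path();  /* all five features guaranteed present */
} else {
    use_generation4_path();  /* none of them can be assumed */
}
[/code]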

I agree with Rob; I want a feature set of this minimum level to code against when I say I am using a specific GL version. Oh, the nightmare of not knowing what you are going to have access to; it’s like showing up at a construction site with no tools, being told to build something, and having no idea what tools you have access to.

[quote]That would offer a middle ground between the two extremes above, and the core API would not need to raise the hardware floor.[/QUOTE]

Fine.

But I doubt you’re going to get that past ATi and nVidia, who will want to tie API features to hardware features, since it helps push hardware and they don’t have to support new features in old hardware.

I like the idea of bundled extensions, where the ARB controls the packs more tightly and basically says “these extensions come together or don’t even bother, but do keep in mind that they will be core as soon as we get around to it”.

The extension list should be kept short, in my opinion. There are some 353 extensions; many have in some form made it into the core, and those that haven’t are either about to or are currently unusable because so few IHVs support them.
There should only be 3 types of extensions: core, approved EXT packs, and experimental/debug, where the last one would have to be explicitly activated in order to be used.
The extension registry should reflect this by putting the extensions where they belong, and have a separate registry for old and discontinued ones that might still exist somewhere (seriously, when was the last time you needed to use GL_SGIX_sprite or GL_EXT_index_material?).

But that defeats one of the original purposes of the extension mechanism - to allow implementations to expose cool new features.

NVIDIA still does that - the NV_* versions of extensions often contain more specific functionality that can improve performance compared to the EXT_* or ARB_* extensions. Apple does it to add features that their developers want. Even MESA has a few extensions. Classifying all of these as “experimental/debug” would turn off developers to their use.

Even though vendor-specific extensions can be annoying, I agree with Paladin on this.

The real crux of the problem with extensions is, and has always been, the ARB’s inability to get extensions into ARB/core form in a timely fashion. Having something like what Rob suggested, with feature APIs being extensions that are “near-core” and bundled together in functionality batches (you must provide all of them), would solve a lot of this. That way, the API could grow in hardware features without hampering API progress.

Well, perhaps after the GL3 fiasco, the ARB will get their act together on getting features into core. The deprecation model means that figuring out how to make it work with older bits of the API is a moot point.

[On that note, I have a question for ARB people: is there any reason a vendor couldn’t create an extension that only works on forward-compatible contexts, so they don’t have to deal with API cruft in the design? It seems to me that allowing that would make extensions easier to integrate, since interoperability of features would be more constrained]

In fact there is established policy that, going forward, new extensions need not describe or specify interactions with deprecated functionality.
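For reference, requesting such a context is already a creation-time flag. A GLX-flavoured sketch (the WGL path is analogous; dpy and fbconfig are assumed to have been set up already):

[code]
/* Requesting a forward-compatible 3.0 context via
 * GLX_ARB_create_context.  Deprecated functionality is absent from
 * the resulting context, so an extension written against it never
 * has to define interactions with the old API. */
int attribs[] = {
    GLX_CONTEXT_MAJOR_VERSION_ARB, 3,
    GLX_CONTEXT_MINOR_VERSION_ARB, 0,
    GLX_CONTEXT_FLAGS_ARB, GLX_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    None
};
GLXContext ctx = glXCreateContextAttribsARB(dpy, fbconfig,
                                            NULL,   /* no share context */
                                            True,   /* direct rendering */
                                            attribs);
[/code]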

Yeah, and I love every one of them I get to play with, but one of my points still stands: they are useless for a release application unless you want to write multiple paths for different vendors.
These things need to move to EXT pretty much directly.

ATi’s unstated-but-clearly-obvious policy from now on appears to be to ignore all non-core extensions. So I wouldn’t even look to EXT extensions; they won’t bother unless it’s core or a core-extension.

But if one big IHV does support it and another doesn’t, then we still have the problem.
That’s why we need “extension, required”: to make the uber-extension standard on as much hardware as possible, without limiting the new API to only that hardware.

Or how about having separate hardware and API version numbers?
Instead of an uber-extension that contains all of the extensions of a particular hardware generation, call:

GetInteger( Hardware_Generation )

If this returns a value of 9, then we know that we can use all of the standard extensions and functionality for 9th-generation hardware.
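A sketch of how that might look to the application; GL_HARDWARE_GENERATION is of course a made-up token, defined here only so the fragment is self-contained:

[code]
/* Hypothetical hardware-generation query.  The token does not exist
 * in any spec; the value is a placeholder. */
#define GL_HARDWARE_GENERATION 0x9999  /* made-up enum */

GLint gen = 0;
glGetIntegerv(GL_HARDWARE_GENERATION, &gen);

if (gen >= 9) {
    /* Every standard extension and feature of 9th-generation hardware
     * can be used without probing each one individually. */
}
[/code]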

IMHO, there is a need for a new API. Exactly what I said: a NEW API.

Forget everything else; we need a cross-platform, modern, new API.

And this is why I use Nvidia hardware and ATI’s lack of GL commitment is pathetic… IMO

And what is sad is that I used to use ATI hardware a lot, from the Radeon 8500 up until the GF 6 series came out.

[quote] Originally Posted By: Korval
ATi’s unstated-but-clearly-obvious policy from now on appears to be to ignore all non-core extensions. So I wouldn’t even look to EXT extensions; they won’t bother unless it’s core or a core-extension.

And this is why I use Nvidia hardware and ATI’s lack of GL commitment is pathetic… IMO[/QUOTE]
I’m not sure this is true any more. I’ve been monitoring ATI’s extension support since last September and every release has added at least 2-3 new extensions, sometimes more.

That’s not to say everything is working perfectly (I’m mainly working on Linux), but they are showing commitment and are steadily improving the drivers. Things are much better compared to even 1 year ago.