ATi and nVidia working together to make OpenGL 3.0

I think the layered stuff should be implemented by one independent group, and it should be kept completely separate from the LM layer.

Otherwise vendors have three options:

  1. Actually create a layered API, sacrificing speed for it.

  2. Don't stick to the layering idea and also optimize some high-level stuff, thereby gaining an advantage over their competitors.

  3. Don’t really care about the layered API at all, forcing everybody to do their own layering on top of LM.

It just feels as if nVidia would take route 2 and ATI route 3, but that’s just speculation.

Anyway, an additional disadvantage would be that by letting the IHVs also do the layered API, the actual problem we have today (too much work for the driver writers to get everything right) would not be solved, and developers still could not know the quality of the implementation.

If we have ONE layered API, with only the LM API inside the driver, we would always know the quality of the layered API, and errors could be detected and removed easily. If the IHVs also do the layering, we never know whether the problem is in the layered API, in the LM API, or in our own code, and whether it makes sense to ditch the high-level stuff and get down and dirty just to avoid possible errors in the layered API, which would make the whole layering idea completely pointless in the end.

Jan.

Originally posted by V-man:
Functions like glFrustum, glOrtho, and glClipPlane should have float versions, and the double versions of functions should be killed off until such hw becomes available.
Surely glFrustum, glOrtho and the like should not be part of a low-level API? These seem to me like perhaps the most pointless functions in GL. Why should the GL deal with building the projection matrices? Can anyone justify hardware optimisation of this?

But perhaps I am missing the point. In my view that is extremely high-level functionality, but perhaps that isn't exactly the focus. I would leave such things to a public utility library, perhaps one as readily available as the GL headers.
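To make that point concrete, here is a rough sketch of what such a utility function could look like. buildOrtho, setProjection and projLocation are names made up for this example, and the shader path assumes the GL 2.0 entry points have been loaded.

[code]
/* Sketch of the "utility library" idea: build the orthographic matrix in
 * application code instead of calling glOrtho. The layout matches the
 * column-major matrix glOrtho specifies. */
#include <GL/gl.h>
#include <GL/glext.h>   /* GL 2.0 prototypes; loading them is platform-specific */

static void buildOrtho(GLfloat m[16],
                       GLfloat l, GLfloat r, GLfloat b, GLfloat t,
                       GLfloat n, GLfloat f)
{
    m[0] = 2.0f/(r-l); m[4] = 0.0f;       m[8]  = 0.0f;        m[12] = -(r+l)/(r-l);
    m[1] = 0.0f;       m[5] = 2.0f/(t-b); m[9]  = 0.0f;        m[13] = -(t+b)/(t-b);
    m[2] = 0.0f;       m[6] = 0.0f;       m[10] = -2.0f/(f-n); m[14] = -(f+n)/(f-n);
    m[3] = 0.0f;       m[7] = 0.0f;       m[11] = 0.0f;        m[15] = 1.0f;
}

static void setProjection(GLint projLocation)   /* projLocation: made-up uniform */
{
    GLfloat proj[16];
    buildOrtho(proj, 0.0f, 800.0f, 0.0f, 600.0f, -1.0f, 1.0f);

    /* Legacy path: feed the fixed-function matrix stack ... */
    glMatrixMode(GL_PROJECTION);
    glLoadMatrixf(proj);

    /* ... or skip the matrix stack entirely and hand it to a shader. */
    glUniformMatrix4fv(projLocation, 1, GL_FALSE, proj);
}
[/code]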

Whether a layer is written by a neutral third party or by the hardware vendor, it needs to exist within the driver or we lose backward compatibility. Breaking existing apps is considered unacceptable.

Layering is conceptual and there is no promise how an implementation may choose to structure the driver code. The fact is, if a neutral third party writes the layer and it runs at 50% of the speed of the native implementation, this won’t fly in the market.

Officially deprecating functionality by “layering” it has two benefits which are independent of the actual implementation.

  1. It documents the features which new development should be discouraged from using.

  2. It grants us the freedom to write new extensions without being bogged down by considerations of the interaction with legacy functionality.

Whether or not the actual implementation of legacy features is “layered” is transparent to the developer. Much of OpenGL 2.0 is already layered; you just don’t see this externally. All we’re doing with this proposal is exposing this fact to the developer.

glOrtho, glFrustum, and material changes are fixed-function concepts. These are already layered on shaders internally. I don’t expect these to move into hardware anytime soon.

Display lists do offer the opportunity for drivers to optimize certain kinds of state changes (e.g. material changes). In practice this has not worked out well; consider an implementation which needs to examine five pieces of state in order to set up a particular hardware block. If a display list contains only four of the five, there is no way to optimize this; at display list execution time the driver still needs to do the same work as if the state changes were provided by immediate mode.

A better solution to this problem is state objects, which can encapsulate related sets of state. One example of a state object which exists today is a texture object; but there is room for improvement even in this example. Another example is a shader, which subsumes many pieces of fixed function state.
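To make the idea concrete: purely as a hypothetical sketch (this is not a proposed interface; every lm* entry point below is invented for the example), a blend-state object could be built and used like this. The point is that all the related state reaches the driver together, so it can validate the whole hardware block once, at creation time, instead of on every individual state change or display list execution.

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* for GL_FUNC_ADD */

typedef unsigned int GLblendstate;

/* Invented entry points, declared here only so the sketch is self-contained. */
GLblendstate lmCreateBlendState(void);
void lmBlendStateFunc(GLblendstate bs, GLenum src, GLenum dst);
void lmBlendStateEquation(GLblendstate bs, GLenum eq);
void lmCompileBlendState(GLblendstate bs);   /* driver validates everything here, once */
void lmBindBlendState(GLblendstate bs);      /* cheap bind at draw time */

GLblendstate makeAlphaBlendState(void)
{
    GLblendstate bs = lmCreateBlendState();
    lmBlendStateFunc(bs, GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    lmBlendStateEquation(bs, GL_FUNC_ADD);
    lmCompileBlendState(bs);
    return bs;
}

void drawTransparent(GLblendstate bs)
{
    lmBindBlendState(bs);
    /* ... issue draw calls; no per-draw state validation needed ... */
}
[/code]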

Ok, trying to wrap my head around this:

You would have a core set of functions, and implement all(?) legacy functionality (immediate mode, display lists, etc.) in a new layer, using that core? This way extensions would only have to fit the core, and their impact on the specification would be smaller (as would the specification itself). I suppose functionality in the legacy layer could still be accelerated if anyone wanted to do so - e.g. quadrics might be done in hw if a vendor really wanted to.

As a whole, I don't see any problems with this, as long as this layer cannot be skipped by some vendors - which, I predict, would basically kill OpenGL… it is "lacking"(*) utility functions enough compared to D3D as it is. It might become difficult, though, to use the legacy functions with new extensions - but that is already happening anyway.
(*) no, I do not think those utility functions belong in a graphics library…

I believe one of the strengths of OpenGL is the availability of extensions and the possibility for vendors to experiment. Making it easier to do so by limiting the work associated with creating a new extension might not only promote using OpenGL for implementing cutting-edge features, but could very well make the extension specifications more accessible due to their assumed decrease in size (i.e. less of that: "On page 16 the following should be added to the spec…").

I do hope this will be the scenario, and that the redesign of OpenGL will not end up with a managerial decision to cut driver costs by making it more like D3D.

Whether a layer is written by a neutral third party or by the hardware vendor, it needs to exist within the driver or we lose backward compatibility. Breaking existing apps is considered unacceptable.
Not necessarily.

Essentially, right now what we have is the OpenGL32.lib that binds itself to an ICD driver. What could happen is that they provide an OpenGLv30.dll ICD that in turn attaches to the OpenGL Lean and Mean implementation provided by the IHV. Each IHV driver would come with the OpenGLv30.dll ICD, but they would have gotten that .dll from a centralized source.

Gold, I have another question about how the layering would work. Obviously, existing apps need to continue to function, unaltered. However, each vendor supports a disparate set of extensions. Each vendor will have to write their own layer in order to avoid losing functionality (which I assume they will do). I don't expect any generic OGL3 Full layer to support register_combiners, for instance.

Also, to really do the layer without taking a large performance hit, the 'layer' is going to need to have its tendrils go deep into the LM implementation, losing much of the advantage of simplified drivers.

So, the questions are:

Will OGL 3 LM apps be distinguished by the fact that they just are not using OGL 3 Full functionality, or will they actually be in an environment where layered features are not available (declaring their LM-ness ahead of time by some means)?

I assume the latter will have to be true, if driver developers ever hope to work on a simpler codebase.

Will an OGL3 LM application be source-compatible with OGL3 Full?

Is the intention that post-OGL3 extensions will also be accessible to OGL3 Full applications? I.e., will it be possible to write extensions that can only be used from an LM application? Will this be expected behavior for new extensions?

You just say… all previous-generation cards will not support the Lean & Mean layer; they will support the old OpenGL system (GL 1.5), and therefore use the existing OpenGL drivers without further support.
Then for all current/next-generation cards, legacy extensions such as register combiners will just be emulated using the shader language to maintain software backwards compatibility (which is probably what's done now, as hardware combiners aren't today's technology).
There has to be a cutoff point, where the Lean & Mean mode is only implemented on hardware that it successfully abstracts.
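As a toy illustration of what "emulated using the shader language" means (this is not what any driver actually generates, and real register_combiners emulation would be considerably more involved), the fixed-function GL_MODULATE texture environment boils down to a GLSL 1.10 fragment shader like this:

[code]
/* fragment colour = texture sample * interpolated vertex colour,
 * i.e. the GL_MODULATE texture environment, written as a shader string
 * the way a layered/emulated fixed-function path might generate it. */
static const char *modulate_fs =
    "uniform sampler2D tex0;\n"
    "void main(void)\n"
    "{\n"
    "    vec4 texel = texture2D(tex0, gl_TexCoord[0].xy);\n"
    "    gl_FragColor = texel * gl_Color;\n"
    "}\n";
[/code]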

Basically we are moving from a relatively "low level" API to a "high level" one, which puts more pressure on the driver team, yet having fewer entry points makes driver coding simpler…
Abstraction is often a good thing.

Is that it ?

No, at the moment OpenGL as it stands has evolved into a high-level API, because the API has not changed while the hardware has changed. OpenGL function calls therefore require the driver to 'translate' the command and the current states into hardware instructions that bear no resemblance to the original application code. This is slow and vulnerable to bugs.
The proposed change is to create (out of the current OpenGL 2.0 extensions) a new lower-level API. The existing OpenGL API would then translate its commands and states into calls to this lower-level API. As a result, we get a nice, clean and fast API which closely echoes what is actually happening in most hardware, but our older OpenGL applications still work fine.
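Very roughly, the translation could look something like this. Today's VBO entry points stand in for whatever the real lower-level API turns out to be, the layer_* names are made up, and it assumes a buffer object is already bound to GL_ARRAY_BUFFER.

[code]
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_ARRAY_BUFFER, GL_STREAM_DRAW, GLsizeiptr */

#define LAYER_MAX_VERTS 65536

static GLfloat layer_verts[LAYER_MAX_VERTS][3];
static int     layer_count;
static GLenum  layer_prim;

void layer_Begin(GLenum prim)          /* what glBegin would forward to */
{
    layer_prim  = prim;
    layer_count = 0;
}

void layer_Vertex3f(GLfloat x, GLfloat y, GLfloat z)   /* glVertex3f */
{
    if (layer_count < LAYER_MAX_VERTS) {
        layer_verts[layer_count][0] = x;
        layer_verts[layer_count][1] = y;
        layer_verts[layer_count][2] = z;
        layer_count++;
    }
}

void layer_End(void)                   /* glEnd: flush the batch in one go */
{
    glBufferData(GL_ARRAY_BUFFER,
                 (GLsizeiptr)(layer_count * 3 * sizeof(GLfloat)),
                 layer_verts, GL_STREAM_DRAW);
    glVertexPointer(3, GL_FLOAT, 0, (const void *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(layer_prim, 0, layer_count);
}
[/code]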

But if I correctly understand gold’s reply, this whole layering business is a purely conceptual thing.

So we still won't get a nice and clean API until perhaps OpenGL 80.0 finally removes all the layered functionality of OpenGL 2.0, while still keeping everything from 3.0 to 79.0 around, so it won't be clean either…

Perhaps I was expecting a bit too much, but to me it looks like this is going exactly the same way as the original OpenGL 2.0 proposal.

Could it be that they just want to disappoint us again, so that we finally move over to D3D?

Can’t they see we need a NEW and CLEAN API???

You can’t expect support for old apps to be dropped. At least with developments like this we’ll be collecting less garbage. As a developer, you’ll have the choice to use the new cleaner API.

It's not like D3D isn't faced with similar issues, and with OpenGL being the standard on many (emerging) platforms and efforts like this by Nvidia and ATI, in the long run I see the death of D3D, not OpenGL. Just the fact that it is "open" says something for its future.

No one expects support for old apps to be dropped.

But after the initial reports, I expected to get a backwards-compatibility layer on top of a new and clean API. But it seems they just call it layered, while really it will stay the same as it is now. Of course it will be layered, but only as far as it is already layered now, as a driver-internal implementation detail.

With this we lose two really important benefits:

  1. Driver development will not be easier than it is now, since all layered functionality will still have to be implemented by the driver writer.

  2. Legacy features will never really disappear over time, and they can never be really dropped. That’s a problem because there is no clear distinction between “new” and “old” API. If there were a clear cut, we wouldn't have to ask ourselves whether we should drop legacy features. They would just not be present in the “new” API, and anyone who likes to use them would have to use the “old” API.

Sorry if I sound so disappointed, but I really expected some things to be “removed” from the core and placed into a utility library. In the long run this will be inevitable, so better to do it before the API gets really ugly.

And before someone complains, yes, this is possible without breaking backwards compatibility.

Sounds like it won’t happen (again).

Could it be that they just want to disappoint us again, so that we finally move over to D3D?
The comparison to D3D seems quite apt.

When a new version of D3D comes out, it comes with a new API. But it also includes the old APIs, as well as a layer that converts those old API functions into the new API (actually, probably going directly to the driver underneath).

This is exactly what they are proposing for OpenGL. Except that it will probably be possible to use some of the layered functions when working with the L&M API.

The key point is this: driver developers won’t have to write/maintain the layer.

That’s a problem because there is no clear distinction between “new” and “old” API.
There isn’t? So, you’ve seen this new API in action and have some real facts for us?

You’re getting worked up over conjecture and speculation. We have very little information about the ATi/nVidia proposal, and we have less information about how the ARB will decide to implement it.

Originally posted by Korval:
The comparison to D3D seems quite apt.

Just to improve my English: What does “apt” stand for?

Originally posted by Jan:
Originally posted by Korval:
The comparison to D3D seems quite apt.
Just to improve my English: What does “apt” stand for?

It is shorthand for “appropriate”.

Umm, yeah. Guys, don’t go nuts with the speculation. While there are some details to be worked out, our goals include:

  1. Backward compatibility for OpenGL 1.0 apps
  2. Clear direction for app developers about which paths are “preferred”.
  3. Lower maintenance burden for the driver.

I notice that items (1) and (3) are making some of you nervous, as they appear to be contradictory. It's not my intention to publicly debate our implementation strategies, but I will say that we have a plan, and I'll ask you to reserve judgement and not turn this thread into something ridiculous.

If you have questions about the functionality in the proposal, and not “how the hell do you plan to do that??”, I’m happy to answer.

I'm not doubting points 1 and 2, and I'm very glad to hear point 3 has high priority for you, because some of your earlier posts seemed (at least to me) to contradict point 3 (no, I don't think 1 contradicts 3).

Perhaps I read a bit too much into this sentence:

it needs to exist within the driver or we lose backward compatibility
What I'm mainly frustrated about is the missing point 4: clean up the API. I know this seems to contradict point 1, but it doesn't actually.

I'm not saying anything proposed here is bad. In fact I think it's a great development; I mean, 3 of the top 4 points on my future development wishlist is not that bad :wink: .

I just think it could be better. And I always get a bit frustrated when I find out that things actually aren't as good as I initially thought, especially when no one can give me a good argument why it can't be the way I would prefer it to be.

First, I don't understand how this conceptual layering can lower the burden on the driver writer when, on the other hand, the layer needs to exist within the driver. But after the last post, I think I'm going to accept that you have that angle covered, even if you prefer not to reveal the details.

What I'd really like to have an explanation for is: why does the layered functionality need to exist within the driver to ensure backward compatibility? Applications don't use the driver directly, they use the dll. So why isn't it possible to just put another level of indirection in between for every application that uses the old dll, and make a "clean" new dll for new applications?

I'm no Windows developer, so perhaps there's some stupid limitation on this particular platform why this can't be done, but on Linux this would be trivial to implement.
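Something like the following is what I have in mind; it is only a sketch, and libGL_lm.so and lmClear are names I invented, since nobody has seen the real thing. The old library keeps exporting the legacy entry points and simply forwards them.

[code]
/* Build with -ldl. */
#include <dlfcn.h>
#include <stdio.h>

typedef void (*lm_clear_fn)(unsigned int mask);

static void       *lm_handle;
static lm_clear_fn lm_clear;

static int layer_init(void)
{
    lm_handle = dlopen("libGL_lm.so", RTLD_NOW | RTLD_LOCAL);
    if (!lm_handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 0;
    }
    lm_clear = (lm_clear_fn)dlsym(lm_handle, "lmClear");
    return lm_clear != NULL;
}

/* Exported under its original name so existing apps keep resolving the
 * same symbol; internally it is just a forwarder. */
void glClear(unsigned int mask)
{
    if (!lm_clear && !layer_init())
        return;              /* new library not found: nothing we can do */
    lm_clear(mask);
}
[/code]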

There isn’t? So, you’ve seen this new API in action and have some real facts for us?
No, I’ve not seen this new API, but I’ve seen some posts here:

Layering is conceptual and there is no promise how an implementation may choose to structure the driver code.

Whether or not the actual implementation of legacy features is “layered” is transparent to the developer.
And I assume gold has seen this new API (or at least roughly knows what it's going to look like). If I misinterpreted these posts, please tell me :wink:

Cleaning up the API is also a goal. That is implicit in (2).

As for the layer existing inside the driver: consider that even the introduction of a new DLL would not change the fact that existing apps link with opengl32.dll, and opengl32.dll calls into the "driver". If it's not there, we're not backward compatible.

If there is an overwhelming desire to have a new DLL which provides an "OpenGL 3.0" interface, this can be implemented as a wrapper. Any of you could write this, and in fact that would be preferable, since we have plenty of work to do on the internals. :rolleyes: