ATi and nVidia working together to make OpenGL 3.0

but leave the geometry stuff, because some IHV seem to do a really good job with it.
Yes, absolutely. If you look at the gamedev link that summarizes the GDC presentation, under the section mentioning display lists as candidates to move out of the LM profile there is another section that discusses replacements and additions:

Geometry-only display lists (for small static data, for instance) (no GL_COMPILE_AND_EXECUTE, since it’s very problematic to implement efficiently)
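
For reference, the pattern being kept is the plain GL_COMPILE path; a minimal sketch:

[code]
/* Geometry-only display list: compiled up front with GL_COMPILE
   (never GL_COMPILE_AND_EXECUTE, per the restriction quoted above). */
GLuint list = glGenLists(1);

glNewList(list, GL_COMPILE);
    glBegin(GL_TRIANGLES);
        glVertex3f(0.0f, 0.0f, 0.0f);
        glVertex3f(1.0f, 0.0f, 0.0f);
        glVertex3f(0.0f, 1.0f, 0.0f);
    glEnd();
glEndList();

glCallList(list);        /* replay each frame */
glDeleteLists(list, 1);  /* free at shutdown */
[/code]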

Originally posted by MikeC:
1. What’s the current status of the legal agreement that precluded anyone but Microsoft from shipping opengl32.lib/.dll for Windows? If it still holds, and if the price of sidestepping that agreement is a name change, this is probably as good a point to do it as any. I’ve heard from several OpenGL toe-dippers that having to use GL2 as 1.1+extensions is severely off-putting.
I didn’t see anything about sidestepping.

What would give toe-dippers the impression they are using core GL instead of GL 1.1+extensions?

Originally posted by V-man:
What would give toe-dippers the impression they are using core GL instead of GL 1.1+extensions?
I may be putting words in their mouth - it was a brief conversation - but I think they expected to be able to just #include a .h, link to a .lib and have up-to-date API features available out of the box.

I may be putting words in their mouth - it was a brief conversation - but I think they expected to be able to just #include a .h, link to a .lib and have up-to-date API features available out of the box.
Well, just have them download a second library and link to a second header. One of the many extension loading libraries out there.
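
For instance, with GLEW (one of those loaders), the whole dance after creating a context is just:

[code]
#include <GL/glew.h>

/* Call once a GL context is current. Returns non-zero if GL 2.0 is usable. */
int initGL(void)
{
    if (glewInit() != GLEW_OK)
        return 0;               /* loader failed; stuck with GL 1.1 */
    return GLEW_VERSION_2_0;    /* glCreateShader etc. now resolvable */
}
[/code]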

Originally posted by Korval:
Well, just have them download a second library and link to a second header. One of the many extension loading libraries out there.
Korval, you know that, and I know that, but to beginners it’s not obvious a) that they should, b) why they should or c) why there’s a choice of umpteen different extension libs for a standardized API.

How many “where can I get OpenGL 1.x/2.0” threads have we seen in Beginners over the years?

Korval, you know that, and I know that, but to beginners it’s not obvious a) that they should, b) why they should or c) why there’s a choice of umpteen different extension libs for a standardized API.
I understand the problem, but there is no solution to it. Even if you made a new GL 3.0 lib, it would be outdated again as soon as GL 3.1 features arrived.

<aside>
How the API is physically delivered is a completely different topic, and quite a boring one. Start a new thread if you want to turn your faces blue. I’m assuming ‘gold’ is from either nvidia or ati, and hence could be used as a mouthpiece on the future of the OpenGL API, so shut up about the bloody header/lib thing…please.

<on-topic>
gold, I did read that about display lists, but just wanted to emphasise that I do actually use display lists for geometry optimization, so please speak up for their continued inclusion.
Apart from that, I think what should definitely be included is a mechanism for querying the best formats/setups for a specific implementation.
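
Even something as small as this would do; a hypothetical sketch (glGetIntegerv is real, the enum is invented):

[code]
/* Hypothetical query: ask the implementation for its preferred
   vertex layout. GL_PREFERRED_VERTEX_FORMAT does not exist today. */
GLint preferred;
glGetIntegerv(GL_PREFERRED_VERTEX_FORMAT, &preferred);  /* invented enum */
[/code]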

Why can’t the optimized geometry be stored in geometry buffers instead of using display lists for that? Geometry buffers seem to be the right abstraction for it.

I ordered a large pizza with the works, to celebrate this news. I think OpenGL has its best days ahead.

knackered, enough about the display lists already.

@tarantula: What exactly do you mean by “Geometry Buffers”? Do you mean it should be put into a VBO behind the scenes?

I don’t like the huge amount of work necessary to get to using an extension. However, as long as there are free libraries I can use for this, I don’t think it’s an issue.

And yes, I’d also like to have a mechanism to ask the driver how it likes data best.

Also, I think OpenGL, like D3D, should offer a few more basic things, like being able to query the amount of video RAM or to set VSync (which is still a WGL extension!).
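
For example, enabling VSync today means fishing the entry point out of WGL by hand:

[code]
/* WGL_EXT_swap_control -- currently the only way to control VSync on
   Windows (assumes <windows.h> and a current GL context). */
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1);   /* 1 = sync to vertical retrace, 0 = off */
[/code]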

Jan.

Originally posted by Leghorn:
I ordered a large pizza with the works, to celebrate this news. I think OpenGL has its best days ahead.
I think we all felt that way already when the original 2.0 Spec was released. :mad:

With the fiscal and mental might of ati and nvidia behind it, I should think that it’s going to actually happen this time :wink:

Fewer differences mean less polarized driver development.
True, but I would be careful in suggesting that a better API is an emergent property of disparate driver implementations.

A geometry buffer would contain an index buffer, a vertex buffer, the primitive type etc., a container holding all data necessary for the geometry. I was not talking about what happens behind the scenes but about the interface it exposes; it’s kind of inelegant to still use a display list to encapsulate geometry!
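
Something like this, purely as a sketch (the struct and its fields are invented; they only spell out what such a container would hold):

[code]
/* Hypothetical "geometry buffer": one handle owning everything
   needed to draw a piece of geometry. No such GL object exists. */
typedef struct GeometryBuffer {
    GLuint  vertexBuffer;   /* positions, normals, texcoords, ... */
    GLuint  indexBuffer;    /* element indices */
    GLenum  primitiveType;  /* e.g. GL_TRIANGLES */
    GLenum  indexType;      /* e.g. GL_UNSIGNED_SHORT */
    GLsizei indexCount;
} GeometryBuffer;
[/code]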

Well, obviously they wouldn’t be called ‘display lists’, tarantula - full-featured display lists would be emulated in the layered mode. I don’t care what they’re called, ‘geometry buffers’, ‘soup bowls’, ‘sausage skins’, I don’t care. I believe a mechanism for the driver to build an efficient structure from your geometry data is very important, and display lists fill this role nicely at the moment.
Ok, leghorn, I shall put the display list jacket back on the hook.

It seems I started a really useless discussion with my little side comment about the new dll, sorry for that, I should have explained better.

My main question seems to have been swallowed:

Who will write this “full OpenGL on top of OpenGL LM” implementation? As I understood it, this is something that could be done in a hardware-independent way.
I mean, if there really is a “layered” implementation, it should be a separate library, for example like GLU is now. Such a library could be tested independently of the actual driver implementation.

Assuming the library structure stays as it is now, the responsibility for writing the layered part still lies with the driver writer, and we wouldn’t even notice whether the “layered” part is really layered on top of OpenGL LM or not. So there is still a lot of room for driver bugs.

On the other hand, if the libraries are split into a “legacy” OpenGL library that implements the same interface as now, and a separate LM library (implemented by the driver), then the legacy library could be implemented hardware-independently on top of the LM library.

In this scenario, the question “who will write it” comes to mind…

To prevent further discussion in the wrong direction: This has nothing to do with the extension loading mechanism, I’m all for using the extension loading mechanism to update the core version.

I’m not talking about the interface to the programmer, but about the internal organisation of the implementation. The earlier comment was just meant along these lines: if we make a change that radical, removing a lot of entry points, we might as well start over with a new library, and then again use extensions to update to 3.1, 3.2, … But as I said, the real “outside” interface is not important.

About the display lists, geometry buffers:

As mentioned by Jan before, a format abstraction similar to what we have with textures now would be exactly what we need.

Just say to the driver “I want this stored for later drawing in whatever format is most convenient for you”, exactly like we do now with textures. I personally never understood why the vertex array interface looks the way it does, and not more like the texture interface. If it looked similar to the texture interface, we wouldn’t even need VBOs, since this detail would be hidden by the driver.

Ok, that’s not entirely true. But the functionality of VBOs would be reduced to an async upload mechanism, similar to PBOs.
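
To illustrate, a purely hypothetical sketch (none of these entry points exist; the names just mirror the texture API):

[code]
/* Invented entry points modelled on glTexImage2D: hand the driver the
   data once and let it choose the internal layout, exactly as it
   already does for texture internal formats. */
enum { VERTEX_COUNT = 3 };
GLfloat vertices[VERTEX_COUNT * 8];            /* t2f n3f v3f interleaved */

GLuint geom;
glGenGeometry(1, &geom);                       /* invented */
glBindGeometry(GL_STATIC_GEOMETRY, geom);      /* invented */
glGeometryData(GL_STATIC_GEOMETRY,             /* invented, cf. glTexImage2D */
               GL_T2F_N3F_V3F,                 /* layout of the client data */
               VERTEX_COUNT, vertices);

/* A VBO then shrinks to a pure upload path, analogous to how PBOs
   feed glTexImage2D today. */
[/code]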

>Fair enough, limit its functionality by removing
> nesting, transforms, material changes or
> whatever, but leave the geometry stuff, because
> some IHV seem to do a really good job with it.

But there might be hardware out there that can handle material changes efficiently when they’re compiled into a display list. I don’t think their functionality should be limited. Who knows what tomorrow’s hardware could do cheaply in between other work, when the data is prepared the way the hardware wants it (as it can be in a compiled display list).
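
For example, this is legal in a compiled display list today, and a driver is free to bake the material changes into whatever form its hardware replays fastest (drawPart() is a hypothetical helper, not a GL call):

[code]
/* State changes compiled alongside geometry. */
GLfloat red[]  = { 1.0f, 0.0f, 0.0f, 1.0f };
GLfloat blue[] = { 0.0f, 0.0f, 1.0f, 1.0f };

glNewList(list, GL_COMPILE);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, red);
    drawPart(partA);    /* hypothetical helper emitting glBegin/glEnd geometry */
    glMaterialfv(GL_FRONT, GL_DIFFUSE, blue);
    drawPart(partB);
glEndList();
[/code]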

That’s a good argument for the LM to be supplied by the ICD. Any high-level GL features are always candidates for hardware acceleration.

But this would mean the whole layering would be a bad idea…

The point is, most high-level features are not hardware accelerated, so they should be layered on top of some limited set of functionality that is hardware accelerated (called OpenGL LM in this discussion).

When (and if) a hardware accelerated implementation of a layered feature appears, then OpenGL LM should be extended. The layered part should under no circumstances take a shortcut to the hardware.

Otherwise the whole idea about layering would be pointless, since it would be no different than what we have now, namely a single black-box where some features are accelerated and others are emulated, and we can’t determine which ones are the fast ones…