OpenGL 3.2 support in new nVidia Linux beta driver

No stable GLSL spec…

What does that mean?

The 3.2 specification supported in this driver might just be a draft. But I wish nVidia published temporary documentation for the new features, so developers like us could start experimenting with them, test them out, and maybe spot weaknesses.

It is a beta driver; it isn't meant to be widely distributed to begin with. It was leaked from NVIDIA. Documenting a pre-release driver is not a good idea.

Well, GLSL went from 1.2 to 1.3, then a few months later to 1.4, and now, before we even have working GL 3.1 drivers out (except from nVidia), to 1.5, with every version deprecating things from its predecessor.

Maybe it's for the better, I dunno.
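Just to make the churn concrete: here is the same trivial pass-through vertex shader written for 1.20 and then for 1.30, which already deprecated 'attribute' and 'varying' in favor of 'in'/'out' (a minimal illustration, shown as the C string constants you'd hand to glShaderSource):

/* The same vertex shader, before and after GLSL 1.30 deprecated
 * the 'attribute' and 'varying' qualifiers. */
static const char *vs_glsl_120 =
    "#version 120\n"
    "attribute vec3 position;\n"   /* deprecated in 1.30 */
    "varying vec3 color;\n"        /* deprecated in 1.30 */
    "void main() {\n"
    "    color = position;\n"
    "    gl_Position = vec4(position, 1.0);\n"
    "}\n";

static const char *vs_glsl_130 =
    "#version 130\n"
    "in vec3 position;\n"
    "out vec3 color;\n"
    "void main() {\n"
    "    color = position;\n"
    "    gl_Position = vec4(position, 1.0);\n"
    "}\n";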

I wish NVIDIA would take over OpenGL and become the driving force behind its specification.

We know that market force dictates control, and we know that we submit our vote with the “long green” ballot.

Anyways, I think the deprecation mechanism is going to work out better than a lot of folks expected, including myself.

It needs more than just promoting vendor-specific extensions. New core API functionality has to be introduced, such as geometry shading and direct state access, along with moving the object/bind mechanism onto the deprecated list.

GPU feedback, better FSAA integration than having to create the context twice, and so on…
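To illustrate the bind-to-edit problem: with the existing EXT_direct_state_access extension, something as simple as setting a texture parameter no longer has to clobber the texture binding. A sketch (in a real program the EXT entry point is fetched via glXGetProcAddress first):

#include <GL/gl.h>
#include <GL/glext.h>

void set_min_filter(GLuint tex)
{
    /* Classic bind-to-edit: the object must be bound to a target,
     * disturbing whatever was bound there, before it can be edited. */
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* Direct state access: edit the object itself, no binding involved. */
    glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
}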

As we discussed in this topic, we have reason to believe that geometry shaders will be part of the core spec of OpenGL 3.2. If that is the case, they will not be missing.

Why isn’t it a good idea to release a draft of the new specification? The driver and the upcoming OpenGL specification would be more thoroughly tested, and developers would have a chance to send their feedback.

Indeed! Test early and test often! Less chance of doing something that your customer doesn’t want that way too.

Graphics drivers are not something to be taken lightly. Graphics driver bugs can make your computer unusable. At no time do NVIDIA or ATI want anyone to use beta drivers, drivers that are known to have bugs in them. And they certainly do not want to make these leaked drivers seem official by providing documentation and downloads for them.

Plus, the rules of Khronos seem to prevent releasing information on upcoming specifications outside of official channels.

“At no time do NVIDIA or ATI want anyone to use beta drivers”

Oh really? Just yesterday nVidia gave me a beta driver for my notebook through the general user download site. I didn’t check any “include beta drivers” box.

It works fine, though.

Jan.

True. nVidia uses public betas (they probably also have non-public beta drivers, but I don’t know about them). AMD uses non-public beta drivers.

New features in nVidia’s beta drivers are also announced (as happened when OpenGL 3.1 and OpenCL support entered the nVidia drivers). It is only because OpenGL 3.2 has not been officially announced that there is no documentation for it yet.

Just to follow up… if you notice errors or omissions in the specs you can file a bug report with a Khronos bugzilla account (even the likes of yours truly has one).

“Indeed! Test early and test often! Less chance of doing something that your customer doesn’t want that way too.”

That is an excellent question. I’ll answer separately for the specification and drivers.

The OpenGL ARB is part of Khronos, as you know. Khronos has a set of intellectual property (IP) rules aimed at protecting any member who contributes ideas to the specification and any member who uses the final specification to implement OpenGL. Part of the process of releasing specifications outside of Khronos is a 30-day “ratification period”. In those 30 days, all Khronos member companies are supposed to look for any IP they own that has made it into the specification. After those 30 days, the Khronos Board of Promoters votes to ratify and publish the specification. If a member company does not say anything during the ratification period, then it has, in effect, automatically licensed its IP for free to anyone who uses the specification to implement OpenGL. This is called a “reciprocal license”. If a member company does not want to license its IP, the rules spell out clearly what it needs to do to keep its IP protected.

The exact wording, in all its nitty gritty details, you can find here, if interested: http://www.khronos.org/files/member_agreement.pdf

What this basically means is that no document the ARB works on, be it an update of the core specification, the Shading Language, or any ARB extension, can be shown to anyone outside of Khronos until the 30-day ratification period has passed. As I said before, this process is there to protect everyone involved with Khronos. Without it, Khronos would not be able to function. There are over 100 companies, some small and some very big, that are part of Khronos. They all have IP portfolios, and they value them highly. There has to be a set of rules governing the IP involved in the work that Khronos does.

On balance, I think the Khronos process works really well. The old OpenGL ARB (before joining Khronos) did not have a good IP framework, and as a result, any time someone even hinted at a patent, the affected feature was put on hold and not talked about again.

Feedback from others does make it into the specification, however. Your constructive input on these forums is very valuable, and many ARB members read it. One example of such input is the “Talk about your applications” thread (http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=246133#Post246133). Keep it coming!

Another way that input makes it into the specification is through vendor extensions. Those are often written and implemented by one or more vendors because there is a real developer need. The important and successful extensions make it into the core specification eventually. Maybe not in the next release, but it can be the release after that.

OK, let’s talk about drivers. Drivers are released often. Bugs are found, sadly enough. If you file a bug with the driver vendor, or mention it on this forum, there’s a good chance it’ll be acted upon and fixed in the next driver release. AMD, S3 Graphics, and NVIDIA all provided OpenGL 3 (beta) drivers soon after the specification was released. Thus you, as a developer, do get access to drivers to test with quickly.

Hope this helps!

Regards,
Barthold
OpenGL ARB WG Chair

If it is supported by everyone, the only benefit to moving it into core would be tidiness, really. There is stronger value in moving something into core when some subset of the vendors are dragging their heels on supporting a feature that everyone has in their silicon already (or in the other API) - in those cases it creates pressure for the vendor to add support for it so that they can claim compliance with the latest specification. This isn’t the case with aniso filtering.

i.e. - if it’s supported everywhere, then use it and be happy :)
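For instance, with GL_EXT_texture_filter_anisotropic it is only a couple of lines (a sketch; check for the extension string before relying on it, of course):

#include <GL/gl.h>
#include <GL/glext.h>

/* Enable the strongest anisotropic filtering the hardware supports
 * on the currently bound 2D texture. */
void enable_max_aniso(void)
{
    GLfloat max_aniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
}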

There is a corpo-political reason why it hasn’t been put in the core specification; it is certainly a topic that has come up more than a few times in working group discussions. It wasn’t simply forgotten or overlooked; there was a roadblock to making that change.

Understandable. But maybe, for the sake of tidiness ;) it could at least be given ARB status some time.

Anyway, I still think the LP object model is the most important API feature that needs to be introduced soon. Immutable state objects and all that stuff would be extremely helpful in reducing buggy code (on both sides, driver writers and application programmers) and in ensuring best-performance practices.
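Something along these lines; the names below are purely hypothetical, in the spirit of the Longs Peak proposal, not any real API:

/* HYPOTHETICAL immutable state object, Longs-Peak style. All state is
 * supplied at creation time and validated once; afterwards the object
 * can only be bound or deleted, never mutated, so the driver never has
 * to re-validate it at draw time. None of these functions exist in GL. */
typedef struct GLblendStateRec *GLblendState;

GLblendState glCreateBlendState(GLenum srcFactor, GLenum dstFactor); /* validated here, once */
void glBindBlendState(GLblendState state);    /* cheap: no revalidation */
void glDeleteBlendState(GLblendState state);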

Also, all shader interfacing is a nightmare in more complex applications (i.e. binding vertex arrays, uploading uniforms). VAOs are in principle a solution to the current mess, but only performance-wise (on paper). For fast switches you need hundreds of VAOs, and so far I have found no one who says he could detect a speed-up.
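For reference, this is the pattern VAOs are supposed to enable: set up once, one bind per draw (a minimal sketch):

#include <GL/gl.h>
#include <GL/glext.h>

/* Record the vertex attribute setup once, at init time. */
GLuint make_vao(GLuint vbo)
{
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);

    glBindVertexArray(0);
    return vao;
}

/* Each draw is then a single bind instead of re-specifying everything. */
void draw(GLuint vao, GLsizei count)
{
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, count);
}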

Jan.

“If it is supported by everyone, the only benefit to moving it into core would be tidiness, really. […] There is a corpo-political reason why it hasn’t been put in the core specification […]”

From that I read:

  • no anisotropic filtering in the core
  • as we already figured was probable: geometry shaders will enter the core (AMD is dragging its heels on implementing that one…)

Check out the extension registry, 3.2 specs are up!!

Thank God (read: the ARB) for the ARB_sync extension!
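For anyone who hasn’t opened the spec yet, the heart of ARB_sync is fence objects. A minimal sketch (entry points loaded via glXGetProcAddress as usual):

#include <GL/gl.h>
#include <GL/glext.h>

/* Put a fence after the GL commands issued so far, then block (with a
 * one-second timeout) until the GPU has executed everything before it. */
void wait_for_gpu(void)
{
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    GLenum result = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                     1000000000); /* timeout in nanoseconds */
    /* result is GL_ALREADY_SIGNALED, GL_CONDITION_SATISFIED,
     * GL_TIMEOUT_EXPIRED or GL_WAIT_FAILED. */
    (void)result;
    glDeleteSync(fence);
}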

/marek

Great! OpenGL 3.2 and GLSL 1.50 are available, and GLSL 1.50 features geometry shaders. :)
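To celebrate, here is about the smallest GLSL 1.50 geometry shader there is, a pass-through for triangles (as the C string one would hand to glShaderSource; untested against the fresh drivers, of course):

/* Minimal GLSL 1.50 geometry shader: emits each input triangle unchanged.
 * Create the shader object with glCreateShader(GL_GEOMETRY_SHADER). */
static const char *gs_source =
    "#version 150\n"
    "layout(triangles) in;\n"
    "layout(triangle_strip, max_vertices = 3) out;\n"
    "void main() {\n"
    "    for (int i = 0; i < 3; ++i) {\n"
    "        gl_Position = gl_in[i].gl_Position;\n"
    "        EmitVertex();\n"
    "    }\n"
    "    EndPrimitive();\n"
    "}\n";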