They were made public a few months ago and we are using them to generate inline documentation for the .Net bindings. Unfortunately the public repository seems stuck with OpenGL 2.1.
Edit: Also gl.spec doesn’t seem to list any “version 3.1” functions, even though “3.1” has been added to the version string. Any timeframe for the update (so we can plan our release accordingly)?
I would just like to add that the EXT_copy_buffer extension is missing from the OpenGL registry. I wasn’t able to find token values for COPY_READ_BUFFER and COPY_WRITE_BUFFER in the updated header files either (used in the new CopyBufferSubData).
So even though I request a ‘pure’ GL3.1 context, all the old cruft will still be in the driver?! There won’t be any of the performance benefits that a lean-and-mean 3.1 driver promises.
ARB_compatibility undermines the positive effects the deprecation model should have brought. I want a way to turn it off (maybe via a new flag in wglCreateContextAttribsARB). The ones who really need GL_SELECTION (and all that other old stuff) in a GL3.1 environment should be forced to deliberately activate it - and then pay for it.
Why not split driver development at this point? Why not provide a GL3.0+extensions driver and a separate GL3.1+new_extensions driver?
You want developers to take the pain and rewrite their engines for GL3.x? Fine, then reward them with better performance!
Another thing I’m missing is a proper glPushAttrib/glPopAttrib replacement. The push/pop mechanism made it easy to isolate independent rendering code. There should be some new and efficient way to set/change/restore state in GL3.x without having to make multiple calls to glGetXXX().
Many thanks to Khronos/ARB and NVidia for their great work on OpenGL 3.1 design and drivers. Impressive comeback after 3.0! Whatever you’re doing different now, please stick with it!
ARB_compatibility is absolutely necessary, and yes, on NVidia hardware at least, the pre-GL3 zombie will survive a long time. Other vendors don’t have to support ARB_compatibility, so they’re not being held back. A lot of time and money has been invested in writing pre-GL3 code; expecting all of that to be thrown away just to use uniform buffers is unreasonable.
Backwards compatibility is one of the main wins for GL vs D3D, throwing away that advantage is a definite no-no.
Look at how much of an improvement the experimental C++ bindings for OpenCL are compared to the C bindings: http://www.khronos.org/registry/cl/ (you need to be way less verbose).
It would be awesome if we had something like those for OpenGL3.
I must second this.
We develop a medical device that relies heavily on OpenGL: volume rendering (ray casting/slices), huge meshes, numerous points and lines. We have been refactoring the code to remove some deprecated stuff in anticipation of the next release, but all in vain (maybe not: it will help in the transition to DX). Although we use a Quadro FX4600 (and above, costing many many $$$), the drivers are so shitty that you never know where the problem is.
We switched recently to a new driver/card and boom, the application loses texturing or gets stuck; going back to an older card works, restoring the old driver works.
We need a working OpenGL; I don’t mind losing selection/wide lines/push-pop/fixed pipeline etc. for a working implementation.
I must be honest: for months others have been pushing DirectX and I tell them no, no, GL3+ will be great!!!
Well, it doesn’t seem so. I want two separate implementations: one for pre-GL3 and one for post-GL3. In OpenGL 3.1+ I don’t want any old stuff, and I want drivers to be rock solid.
I’m starting to think Korval was right from the start: OpenGL is gone.
If something dramatic does not happen soon, everyone will switch to DirectX: CAD/medical and military. Even CAD users need solid drivers (see AutoCAD).
Well, perhaps the vendors will release something like “legacy drivers” in the future? Say, every once in a while they release a driver that includes the ARB_compatibility extension, but their newest lean-and-mean driver won’t contain it (much like they handle old graphics cards: every once in a while a new legacy driver is released for them, but the mainline drivers no longer support them). My guess is that the old code won’t need every newest driver anyway.
I think that could be a perfect way to phase out the pre-GL3 code and focus on an OpenGL 3.1 driver that is clean.
Personally I’m quite happy with the 3.1 release, I didn’t expect the deprecated stuff really to be removed from this release. The uniform buffer is also quite a nice feature. Besides that: 9 months after the previous release, who would have expected that after what happened with OpenGL 3.0?
My guess (hugely based on hope) is that we will see an OpenGL 4.0 that is the new, rewritten API sooner than most expect. The OpenGL 3.x line is needed for the transition to a new API. They just couldn’t make that huge step at once.
No, it’s not. You want to use ARB_uniform_buffer_object in immediate mode? No problem: ARB_uniform_buffer_object is written against the GL2.1 specs. Just open a GL2.1 context and use them together.
A lot of time and money has been invested in writing pre-GL3 code; expecting all of that to be thrown away just to use uniform buffers is unreasonable.
Just because GL3.1 came out, it doesn’t mean the 2.x (and even 3.0) driver and functionality automatically go away. Your existing software will function as always. If you want to stay with the old functionality, just request a 2.1 context; there’s absolutely no problem. But please don’t penalize those who want to upgrade their software to benefit from faster and more stable drivers.
Backwards compatibility is one of the main wins for GL vs D3D,
IMHO backwards compatibility has become the major drawback of OpenGL. But this has been discussed at length already…
IMO the distinction between making the geometry shader feature core or not would be more significant if there were vendors avoiding implementation of the extension, i.e. if you as a developer wanted to use it but found out that a particular IHV had not implemented it, which could pose a real problem. But my understanding is that it is readily available on both AMD and NVIDIA implementations. So since the actual hard work of implementing it is complete, I would expect that extension to be around as long as the feature still exists in the hardware. Whether that interval is “forever” or “a couple of generations” I do not know.
There are no extensions for the geometry shader. Also:
Geometry Shader Texture Units: 0
Max Geometry Uniform Components: 0
Max Geometry Bindable Uniforms: 0
I think putting them into core would finally make AMD implement the geometry shader as well…
Then, based on the assumption that a feature may or may not exist in hardware, we would end up having no core and everything would be an extension,
unless the GS is being reconsidered as to whether or not it is useful…
But with all these features deprecated, it makes it a bit more like Direct3D 10… with more overhead and backwards-compatibility issues.
I think the idea of the ARB_compatibility extension is a good one. However, I think this extension should be revised to contain a new token for “enabling” compatibility. For instance:
// enable compatibility for the life of the application.
glEnable( GL_COMPATIBILITY_ARB );
If the extension is enabled, then old GL calls would work fine. If the extension is disabled, then the old GL calls would generate an error. Also, having header files that remove deprecated functionality would be a great thing!
My biggest concern with this particular extension has to do with the fact that modern code bases are large. If you’re porting an old code base to GL3.1, and you miss something, ARB_compatibility would make it “just work”, and you’d end up with a pretty serious bug in your program that could go completely unnoticed until your product has long since shipped to the masses. I’d like to be able to avoid these kinds of scenarios to ensure the maximum longevity of my applications.