Official feedback on OpenGL 3.1 thread

The Khronos™ Group announced today it has publicly released the OpenGL® 3.1 specification that modernizes and streamlines the cross-platform, royalty-free API for 3D graphics. OpenGL 3.1 includes GLSL™ 1.40, a new version of the OpenGL shading language, and provides enhanced access to the latest generation of programmable graphics hardware through improved programmability, more efficient vertex processing, expanded texturing functionality and increased buffer management flexibility.

OpenGL 3.1 leverages the evolutionary model introduced in OpenGL 3.0 to dramatically streamline the API for simpler and more efficient software development, and accelerates the ongoing convergence with the widely available OpenGL ES mobile and embedded 3D API to unify application development. The OpenGL 3.1 specification enables developers to leverage state-of-the-art graphics hardware available on a significant number of installed GPUs across all desktop operating systems. According to Dr. Jon Peddie of Jon Peddie Research, a leading graphics market analyst in California, the installed base of graphics hardware that will support OpenGL 3.1 exceeds 100 million units. OpenGL 3.0 drivers are already shipping on AMD, NVIDIA and S3 GPUs.

Concurrently with the release of the OpenGL 3.1 specification, the OpenGL ARB has released an optional compatibility extension that enables application developers to access the OpenGL 1.X/OpenGL 2.X functionality removed in OpenGL 3.1, ensuring full backwards compatibility for applications that require it.

OpenGL 3.1 introduces a broad range of significant new features including:

  • Texture Buffer Objects - a new texture type that holds a one-dimensional array of texels of a specified format, enabling extremely large arrays to be accessed by a shader; vital for a wide variety of GPU compute applications;
  • Signed Normalized Textures - new integer texture formats that represent a value in the range [-1.0, 1.0];
  • Uniform Buffer Objects - enable rapid swapping of blocks of uniforms for flexible pipeline control, rapid updating of uniform values, and sharing of uniform values across program objects;
  • More samplers - at least 16 texture image units must now be accessible to vertex shaders, in addition to the 16 already guaranteed to be accessible to fragment shaders;
  • Primitive Restart - easily restart an executing primitive, for example to efficiently draw a mesh made of many triangle strips;
  • Instancing - the ability to draw objects multiple times by re-using vertex data, reducing duplicated data and the number of API calls;
  • CopyBuffer API - accelerated copies from one buffer object to another, useful for many applications including those that share buffers with OpenCL™ 1.0 for advanced visual computing applications.
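As a rough sketch of the new CopyBuffer path described above (entry points as named in the 3.1 spec; buffer handles and sizes here are purely illustrative, and a current GL 3.1 context is assumed):

```c
/* Copy the contents of one buffer object into another without a
 * round trip through client memory, via the GL 3.1 CopyBuffer API. */
GLuint src, dst;
glGenBuffers(1, &src);
glGenBuffers(1, &dst);

glBindBuffer(GL_COPY_READ_BUFFER, src);
glBufferData(GL_COPY_READ_BUFFER, 1024, NULL, GL_STATIC_DRAW);

glBindBuffer(GL_COPY_WRITE_BUFFER, dst);
glBufferData(GL_COPY_WRITE_BUFFER, 1024, NULL, GL_STATIC_COPY);

/* Copy 1024 bytes from offset 0 of src to offset 0 of dst. */
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, 1024);
```

The dedicated GL_COPY_READ_BUFFER / GL_COPY_WRITE_BUFFER targets exist so the copy doesn't disturb whatever is bound to the regular array or uniform buffer targets.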


Unexpected release… so far so good. I was waiting for uniform buffers, I’m glad it’s here! Still waiting for the actual specifications now, though…

It’s not much, but if the idea is to release every 6 months… Woohoo!

The specifications are now available in the OGL registry:

I just looked over the specs. Very nice and unexpectedly clean ;). If OpenGL 3.2 gets the direct state access stuff it will be great.

great work!

P.S. One question about the UBO extension: is it possible to have multiple uniform buffers bound at once?

uniform lights

uniform material

This is a question from my first quick look at the spec. What does this look like on the host side, with multiple UBOs backing these blocks?

How does the ARB_compatibility extension work?

If the extension is supported, a 3.1 context has all the deprecated stuff still there anyway? Or does an application need to explicitly request backwards compatibility somehow?

> If the extension is supported, a 3.1 context has all the deprecated stuff still there anyway?

That is correct. Just check for GL_ARB_compatibility in the extension string if you need to use any of the deprecated features.
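A minimal sketch of that check, assuming a current 3.1 context. glGetStringi was introduced in GL 3.0, so it's the safe way to enumerate extensions here regardless of whether the old combined extension string is still available:

```c
/* Scan the extension list for GL_ARB_compatibility. If found, the
 * deprecated 1.x/2.x functionality is present in this 3.1 context. */
GLint n = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &n);

int has_compat = 0;
for (GLint i = 0; i < n; ++i) {
    const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_ARB_compatibility") == 0) {
        has_compat = 1;
        break;
    }
}
```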

This seems odd. So if GL_ARB_compatibility is in the string, all the functionality is still there? Wouldn’t it be more logical to use a backward compatibility flag during context creation, like the forward compatibility flag for OpenGL 3.0?

Should be completely doable, but there are hardware limits to be aware of. Also, note that the UBO extension was written so that it could apply to some pre-GL3 hardware as well, such as Radeon X1000 and GeForce 7 - though availability of the extension on those parts is up to each vendor to decide. On those parts you could quite likely be limited to a single UBO bound per draw - this is my initial guess. But on GL3 hardware the limits are higher… 16, I think?
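The host-side flow the question asks about can be sketched like this, assuming a linked program object `prog` containing two named uniform blocks, "Lights" and "Material" (the block names and buffer handles are made up for illustration):

```c
/* Bind two uniform buffers to two uniform blocks in one program. */
GLuint lightsIdx   = glGetUniformBlockIndex(prog, "Lights");
GLuint materialIdx = glGetUniformBlockIndex(prog, "Material");

/* Assign each block to its own uniform buffer binding point... */
glUniformBlockBinding(prog, lightsIdx,   0);
glUniformBlockBinding(prog, materialIdx, 1);

/* ...and attach a buffer object to each binding point. */
glBindBufferBase(GL_UNIFORM_BUFFER, 0, lightsBuf);
glBindBufferBase(GL_UNIFORM_BUFFER, 1, materialBuf);

/* The implementation reports how many simultaneous bindings it supports. */
GLint maxBindings = 0;
glGetIntegerv(GL_MAX_UNIFORM_BUFFER_BINDINGS, &maxBindings);
```

Querying GL_MAX_UNIFORM_BUFFER_BINDINGS at runtime is the portable way to discover the per-implementation limit discussed above.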

wouldn’t it be more logical to use a backward compatibility flag during context creation like with the forward compatibility flag for OpenGL 3.0?

Just a different way of doing it I suppose. But the deprecation model as written includes the concept of outgoing features being pulled back out to extension land. I have no prediction on which vendors will offer that ext and for how long. Ultimately it’s going to be developer / app uptake that will affect the lifetime of outgoing extensions in the market.

Note that this process could occur again in the future, so the context-creation-flag approach might not scale as well. People know and understand the extension model already, it’s just being used in a new way here.

Rob, this looks like a fine release. Uniform buffers are gold.

I now only have two big beefs left with OpenGL:

  • Can you please push HARD for decoupling vertex and fragment shaders, so that you don’t need to explicitly link them together and can mix & match as needed, like with ARB_programs or in DX?
  • Full direct state access for all 3.1 features (and commitment to keep supporting it for every future feature, eventually phasing out indirect state access) would be awesome. Bind-to-modify is such a ridiculously horrible idea that it’s absolutely unbelievable that it’s still with us in 2009.
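For readers unfamiliar with the complaint, the contrast looks roughly like this (glTextureParameteriEXT is from EXT_direct_state_access, which is an extension rather than core 3.1; `tex` is an illustrative texture handle):

```c
/* Bind-to-modify (classic GL): editing a texture's state requires
 * disturbing the current texture binding. */
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

/* Direct state access (EXT_direct_state_access): the same edit,
 * addressed to the object itself, without touching the binding. */
glTextureParameteriEXT(tex, GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
```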

Oh yeah, and texture filtering state should be per sampler, not per texture. But you’ve heard that one to death and it’s likely not a big performance win.


UBOs were much needed and direct state access is what OpenGL really needs.

Can’t make any specific promises since this is a group effort and it is too soon to say really, but many of the suggestions on the last couple of posts carry some noticeable weight behind them in terms of “working group interest level” for the next major revision. That said we are trying to stay schedule driven and we might start out a plan with 5 major things on the list and then ship with 4 in order to avoid extensive schedule creep, so that’s why we don’t carve it in stone here…

UBO was one that caused a bit of schedule elongation on the 3.1 release but we can see that it was worth the slight added wait.

Will pull down and read the specs today, but the bullet points above seem to indicate you’ve covered most of the things I am immediately concerned about. Thanks.

Just have to wait for Apple to roll it out then… :wink:

Haven’t had the chance to download the specs yet, but what about the fixed-function pipeline? Is it still in there, or has it finally been removed?

Nice work! I hope vendors will eventually make us happy with good, stable drivers that behave identically - at least for the 3.1 part. And maybe this will bring OpenGL back into game development.

I’m the guy who cares about games on Linux (well, I hope to see them in the future), and without a good competing graphics API that’s not even possible. I mean, the obvious way it will happen is: make a game for window$ and port it to Linux/macOS, since the API is OpenGL anyway. In other words, I hope we will see games for window$ built on top of OpenGL in the future.

Once again, great job!

NVIDIA already released 3.1 drivers, nice work!

They’re supposed to be OpenGL 3.1 drivers, but they’re definitely not there yet.
Anyway, drivers are on their way for both NVIDIA and ATI.

Nice one - agreed with ector though: direct state access in 3.2 pls.

Good stuff.

From :

This driver implements all of GLSL 1.30 and all of OpenGL 3.0, and all of OpenGL 3.1 and GLSL 1.40, except for the following functionality:

* The std140, column_major and row_major layout qualifiers in GLSL 1.40
* The API call BindBufferRange() ignores the <offset> and <size> parameters. In other words, BindBufferRange() behaves the same as BindBufferBase()
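For reference, BindBufferRange() is meant to expose only a sub-range of the buffer to a binding point, with the offset constrained by the implementation's alignment; with the driver limitation quoted above, the whole buffer gets exposed as if BindBufferBase() had been called. Correct usage looks roughly like this (`ubo`, the offset, and the size are illustrative):

```c
/* Bind bytes [offset, offset + size) of ubo to binding point 0.
 * offset must be a multiple of GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT. */
GLint align = 0;
glGetIntegerv(GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT, &align);

GLintptr   offset = 256;  /* illustrative; assumes 256 % align == 0 */
GLsizeiptr size   = 64;   /* illustrative block size in bytes */
glBindBufferRange(GL_UNIFORM_BUFFER, 0, ubo, offset, size);
```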

Can’t test on my oldish card, but this sounds already quite complete.

Just dropped by out of self-inflicted exile to say: GOOD JOB, KHRONOS