Official feedback on OpenGL 4.5 thread


The Khronos Group, an open consortium of leading hardware and software companies, today announced growing industry support for the OpenGL family of 3D standards that are advancing the visual experience for more than two billion mobile devices and PCs sold each year. OpenGL, OpenGL ES and WebGL are the world’s most widely deployed APIs that between them provide portable access to graphics and compute capabilities across multiple platforms, including Android, iOS, Linux, OS X, Windows and the Web.

OpenGL 4.5 Specification Released
Khronos publicly released the OpenGL 4.5 specification today, bringing the very latest functionality to the industry’s most advanced 3D graphics API while maintaining full backwards compatibility, enabling applications to incrementally use new features. The full specification and reference materials are available for immediate download from the OpenGL Registry. New functionality in the core OpenGL 4.5 specification includes:

Direct State Access (DSA) : object accessors enable state to be queried and modified without binding objects to contexts, for increased application and middleware efficiency and flexibility;

Flush Control : applications can control flushing of pending commands before context switching – enabling high-performance multithreaded applications;

Robustness : providing a secure platform for applications such as WebGL browsers, including preventing a GPU reset in one application from affecting any other running applications;

OpenGL ES 3.1 API and shader compatibility : to enable the easy development and execution of the latest OpenGL ES applications on desktop systems;

DX11 emulation features : for easier porting of applications between OpenGL and Direct3D.

OpenGL Registry
OpenGL 4.5 Reference Card


Direct State Access (DSA)

I can’t believe it … How awesome is that?


VertexArrayElementBuffer(uint vaobj, uint buffer)

It would be much better to have an “intptr offset” as a third parameter, like in VertexArrayVertexBuffer. Yes, I understand that it is possible to apply an offset into the element buffer using the “const void *indices” pointer and “base vertex” in the Draw commands. But this is a VERY complex solution; it is non-native. For example, I have a 128 MB uber-buffer where I store all my VBs and IBs. In this case draw-call management and debugging are really difficult because I have very long offsets. Another example: I store UINT IBs, USHORT IBs and all VBs in one uber-buffer, so I can damage my brain computing offsets for indices in that buffer, where UINT and USHORT IBs can be in random order.
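To make the complaint concrete: without an offset parameter, the sub-buffer's start has to be folded into the `indices` pointer of glDrawElementsBaseVertex by hand. A minimal sketch of that bookkeeping (the helper name and parameters are hypothetical, purely to illustrate the arithmetic):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: when many index buffers live inside one big
 * "uber" buffer, the byte offset of a draw's first index must be folded
 * into the `indices` pointer argument of glDrawElementsBaseVertex,
 * since VertexArrayElementBuffer takes no offset.
 *
 *   ib_start_bytes - where this sub-index-buffer begins in the uber-buffer
 *   index_size     - 2 for GL_UNSIGNED_SHORT, 4 for GL_UNSIGNED_INT
 *   first_index    - first index of this draw within the sub-buffer
 */
static size_t index_buffer_byte_offset(size_t ib_start_bytes,
                                       size_t index_size,
                                       size_t first_index)
{
    return ib_start_bytes + first_index * index_size;
}
```

The result is then cast to `const void *` when passed to the draw call, which is exactly the pointer-as-offset juggling, per index type and per sub-buffer, that a real offset parameter on VertexArrayElementBuffer would make unnecessary.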


Typo in GL_ARB_direct_state_access: Example 3 - Creating a vertex array object without polluting the OpenGL states

in “// Direct State Access”
Line 3147: glEnableVertexAttribArray should be glEnableVertexArrayAttrib


I have a question regarding the new NG API. I know the API itself will break backwards compatibility, but will this also be the case with GLSL? Will GLSL shaders still run on the new API?


I was wondering if the “next generation” API will hold any compatibility with the GLSL shading language?



There seem to be a few bugs in the 4.5 parts of the gl.xml, glcorearb.h, glext.h and the ‘OpenGL 4 Reference Page’. The ‘size’ argument of multiple functions differs from what the standard demands (GLsizei instead of GLsizeiptr), e.g. glNamedBufferStorage, glNamedBufferData, glCopyNamedBufferSubData, glTransformFeedbackBufferRange, glClearNamedBufferSubData, glGetNamedBufferSubData, …
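The distinction is not cosmetic: GLsizei is a 32-bit signed int while GLsizeiptr is pointer-sized, so a header that declares `size` as GLsizei silently truncates any buffer size of 2 GiB or more. A small self-contained sketch (the `_model` typedefs mirror the Khronos types locally rather than including a GL header):

```c
#include <assert.h>
#include <stdint.h>

/* Modeled locally rather than pulled from a GL header: GLsizei is a
 * 32-bit signed int, while GLsizeiptr is a signed, pointer-sized type
 * (64-bit on the platforms where >2 GiB buffers are plausible). */
typedef int32_t GLsizei_model;
typedef int64_t GLsizeiptr_model;

/* Returns 1 if `size` survives a round trip through the narrower
 * GLsizei, i.e. if the erroneous prototype would happen to be harmless
 * for this particular value. */
static int fits_in_GLsizei(GLsizeiptr_model size)
{
    return (GLsizeiptr_model)(GLsizei_model)size == size;
}
```

On a 64-bit build, a 4 GiB allocation passed through such a prototype would arrive at the driver as 0 bytes, which is why the spec text demands GLsizeiptr for these functions.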



I’m rather surprised to see the complete lack of reactions under this announcement. Previous OpenGL versions were greeted by a flurry of messages, people going through the spec and pointing out likes and dislikes. This version brings the long-requested feature of DSA - and yet nobody seems to care. How come?


Well, DSA was abandoned for years, and now out of nowhere it is suddenly revived. At the exact same time it is announced that OpenGL is going to be re-built from the ground up. This of course isn’t the first time we’ve heard this story. Frankly I think people don’t know what to expect at this point.


I suspect that most people just bit the bullet and used GL_EXT_direct_state_access anyway. I know that id Software did (link) and Valve have a mention of it in one of their slides too. The fact that it was so widely supported made this something safe and easy enough to do.

It’s also the case that many recent features had a DSA API from the outset (sampler objects), or a DSA API was unnecessary (vertex attrib binding); they were specified in such a way that bind-to-draw has no (or minimal) interference with bind-to-edit/create (multi-bind), or they were even part of DSA brought into core already (the glProgramUniform calls). So full DSA had become largely unnecessary except in certain very specific cases.

Finally, we all know how long it takes both AMD and Intel to get new drivers supporting new GL_VERSIONs out. GL 4.5 and DSA means nothing until we get comprehensive widespread driver support; until then it’s best suited to tech demos and private projects on a single vendor’s hardware.


That was, unfortunately, a bug in the forum. Our webmaster has fixed it now. Sorry about that!



Looking at the reference pages, the following prototypes strike me as incorrect:

void glProgramUniform2ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLuint v1);

void glProgramUniform3ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLuint v2);

void glProgramUniform4ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLint v2, // surely GLuint?
GLuint v3);


[QUOTE=H. Guijt;1261954]Looking at the reference pages, the following prototypes strike me as incorrect:

void glProgramUniform2ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLuint v1);

void glProgramUniform3ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLuint v2);

void glProgramUniform4ui( GLuint program,
GLint location,
GLint v0, // surely GLuint?
GLint v1, // surely GLuint?
GLint v2, // surely GLuint?
GLuint v3);[/QUOTE]

If you feel it’s incorrect, can you open a bug report at



It’s clearly incorrect. I opened a report on the bugzilla about a mistake in the man pages and it has gone unnoticed for the better part of a year.


Thanks for bringing this to my attention, and I fully understand your frustration. Would you happen to have the URL to the bug? I’m forwarding these items directly to the work group.

Thanks for your extended patience.




The 4.5 version has been here for a while, but it seems like NVIDIA does not bother to support it on a variety of its hardware. I have a GeForce GT 520 video card, and OpenGL 4.4 is still the top version supported by the newest drivers (version 344.75). What bothers me most is that GL_ARB_clip_control is not supported (the other extensions brought by OpenGL 4.5 are awesome to have as well, but this particular one is the most critical for me). I know it shouldn’t matter for a cross-platform library like OpenGL, but “just in case” I will mention that I am on WinXP32. So what should I do? Wait some more (but how long, and is there any guarantee?..) or buy a new video card?


This is certainly not the proper section for this kind of question. Everything driver-specific should go in the OpenGL drivers section.

OpenGL 4.5 is supported in NVIDIA beta drivers for Windows (since you are using XP) ver. 340.65, 340.76 and 340.82.
GL_ARB_clip_control was supported as of 340.65 (I didn’t test later ones). If you want to play with beta drivers, download and try one of those mentioned above.

The latest release drivers still support “only” OpenGL 4.4.


Sorry for the off-topic post. Moved here.


The Third Annual Unofficial OpenGL Feature Awards!

And you thought I’d forgotten about you :wink: On to the awards!

We Did What We Said We Were Gonna Award:


I’m not talking about the meat of the extension. I’m talking about the part of it that’s actually new: the fact that we finally get a function that merges name creation and object creation. One of the sillier bits of OpenGL was making these separate, but that was as a consequence of an even sillier bit of OpenGL: giving users the ability to decide what names were for themselves.

Long’s Peak promised a new object creation paradigm. And while LP was going to make virtually every object immutable, ARB_DSA builds a new object creation system that makes name and object creation synonymous. Just like sampler objects.

Which means that we finally have what Long’s Peak promised us (well, most of it). Now, isn’t OpenGL 4.5 such a much cleaner API for it? With our glorious Long’s Peak API, there are only two ways to do the same thing now. That’s progress.

And it only took them 8 years from the initial Long’s Peak announcement for us to get it :wink:

[STRIKE]One[/STRIKE]Two Little Mistakes Award:


I hate to quibble about naming conventions and the like. But it truly amazes me how the ARB could debate the naming issue as extensively as it appears they did from the issues section. And with all of that debate and all of the possibilities before them, they still chose the worst possible alternative.

Couldn’t you guys have just agreed to flip a coin, one naming convention or the other? Sure, some APIs would have unwieldy names. I certainly don’t care much for the idea of having [var]glNamedCompressedTexSubImage1D[/var] or [var]glRenderbufferObjectStorageMultisample[/var]. But at least it would have been a convention; the unwieldiness of some function names would have been defensible by convention. With your way, not only do you have unwieldy names ([var]glInvalidateNamedFramebufferSubData[/var]), you can’t even justify it by citing a convention.

Also, the difference between “Named” and “Object” is exactly one character. The difference between “Named” and “Texture” is exactly two characters. The difference between “Named” and “Array” is zero characters.

So yeah, I don’t see how the non-“Named” APIs are so much better than the “Named” ones that they had to break convention.

Also: [var]glNamedBufferData[/var]. What were you thinking with that one? [var]glBufferStorage[/var] made [var]glBufferData[/var] completely obsolete. Everyone should always be creating immutable storage buffers. And you did it right for textures; you didn’t allow people to use the new API to create non-immutable textures. So why did you think this was a good idea?

Tail Wagging The Dog Award:


OpenGL ES exposed hardware features, legitimate hardware features, that GL 4.4 did not. That’s something of an indictment of the ARB: they somehow missed having imageAtomicExchange work on single-channel f32 images ever since 4.2, which was in fact three years ago.

Well, hindsight is always 20/20. Then again, very few people know that Prometheus, the Greek Titan of foresight, had a brother named Epimetheus, the Titan of hindsight. There’s a reason there aren’t great tales of his exploits.

Let’s Make Geometry Shaders Even More Useless Award:


GS’s were a terrible idea. Originally envisioned as a means of tessellation, it turns out that they were terrible at that. So terrible in fact that they had to create two entirely new programmable stages and a fixed-function stage specifically to make tessellation worthwhile.

But that was OK, because GS’s gave developers the ability to write arbitrary data to buffer objects via transform feedback. So you could issue “rendering commands” that didn’t render, but were used to do things like frustum culling and the like. Oh but wait, 4.3 gave us compute shaders to make that completely irrelevant. CS’s can do all of that, and without the nonsensical faux rendering command.

But even that was OK. Because GS’s could still do something that no other shader stage could: cull triangles. Yes, the TCS could cull patches (little-known fact: if any outer tessellation level used by the TES is zero, then the patch is culled), but it could only cull whole patches at a time. It took a GS to cull things at the triangle level.

No longer.
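For anyone who hasn’t seen the patch-culling trick mentioned above, a minimal tessellation control shader sketch; the `cull` condition is a placeholder for whatever per-patch test (frustum, backface, etc.) you would actually run:

```glsl
#version 400 core
layout(vertices = 3) out;

void main()
{
    // Pass the control points through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    if (gl_InvocationID == 0) {
        bool cull = false; // placeholder: your per-patch culling test

        if (cull) {
            // Setting any outer level to zero discards the entire patch
            // before the tessellation evaluation shader ever runs.
            gl_TessLevelOuter[0] = 0.0;
        } else {
            gl_TessLevelOuter[0] = 1.0;
            gl_TessLevelOuter[1] = 1.0;
            gl_TessLevelOuter[2] = 1.0;
            gl_TessLevelInner[0] = 1.0;
        }
    }
}
```

The granularity problem is visible right in the sketch: the decision is per patch, so everything inside the patch lives or dies together, which is why triangle-level culling remained a GS-only capability.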

ARB, if you’re going to kill off GS’s, just do it already. Give us AMD_vertex_shader_layer/viewport_index (obviously with the ability to set them from the TES too). That would completely nullify any point to that worthless, poorly-named, Microsoftian addition to the rendering pipeline. Or if we can’t have that, at least give us NV_geometry_shader_passthrough, so that GS’s can be focused on the one useful thing they can still do.

Where’s The Beef Award:

OpenGL 4.5

Did ARB_DSA really deserve having a full point-release for it? Because the features exposed in 4.5 are pretty scant. That’s not to say that nothing of substance was released. I’m sure that people will find uses for primitive culling and ES 3.1 compatibility is important (particularly since it had more features than GL 4.4, and we can’t have that). But very little of substance was actually standardized. Especially considering that there is much functionality that is available, across hardware vendors, which is worth standardizing.

Like sparse_textures/buffers. And so forth.

I understand that you kinda want GL 4.x to be implementable on all D3D 11-class hardware. But why exactly is that so important? Especially now.

4.5’s features compared to 4.4 were minor, bordering on non-existent. Plus, 4.4 was quite complete in terms of D3D 11.2-level features; there wasn’t much we were missing. So why not simply let the “backwards compatibility extensions” (the ones without ARB suffixes) allow hardware that couldn’t do sparse stuff to implement what they could, and give us a 4.5 with some real meat?

We Need More Ways To Do Things Award:
Let’s Rewrite Our API And Still Leave The Original Award:


Because that’s what OpenGL needs more of: ways to do something we can already do.

But at least it shows that the ARB understands timing. They know that the best time to introduce a feature that renders a healthy portion of the existing API superfluous… is right before you replace it all with an entirely new API.

That’s the kind of timing that has placed the OpenGL API in the marketplace position it currently enjoys :wink: