GL_EXTENSIONS replacement

In the GL 3.0 spec it says that GL_EXTENSIONS is deprecated as an argument to glGetString, which I guess means you just try to get a function pointer and, if it’s NULL, the extension isn’t supported.

However, if you wanted to get a list of all supported extensions without using deprecated functionality, how would you do it?

GLint n, i;
glGetIntegerv(GL_NUM_EXTENSIONS, &n);
for (i = 0; i < n; i++) {
    printf("%s\n", glGetStringi(GL_EXTENSIONS, i));
}

glGetString() is not deprecated, only the GL_EXTENSIONS argument to it is. There’s a lot of history with people using fixed-size buffers to copy the merged extension string into and having that break when someone releases a new driver, which is what we’re hoping to alleviate in the long term here.

Thanks! I didn’t know glGetStringi existed! I must read the spec more carefully :slight_smile:

I don’t suppose it’s possible to get an extension that can be used to retrieve the capabilities of OpenGL?

Preferably with some way of linking the capabilities with the extension they belong to.

e.g. querying

GetIntegeri_v(GL_EXTENSION_CAPABILITIES, extensionIndex, &capabilities)

would return a list of all the capabilities of that extension

i.e. when glGetStringi(GL_EXTENSIONS, i) returns “GL_ARB_draw_buffers”,
GetIntegeri_v(GL_EXTENSION_CAPABILITIES, i, &capabilities) returns

GL_MAX_DRAW_BUFFERS_ARB

And when glGetStringi(GL_EXTENSIONS, i) returns “GL_ARB_geometry_shader4”,
GetIntegeri_v(GL_EXTENSION_CAPABILITIES, i, &capabilities) returns

GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS_ARB
GL_MAX_GEOMETRY_OUTPUT_VERTICES_ARB
GL_MAX_GEOMETRY_TOTAL_OUTPUT_COMPONENTS_ARB
GL_MAX_GEOMETRY_UNIFORM_COMPONENTS_ARB
GL_MAX_GEOMETRY_VARYING_COMPONENTS_ARB
GL_MAX_VERTEX_VARYING_COMPONENTS_ARB


These values could then be plugged into glGetIntegerv to get the actual values.

A way of retrieving core capabilities as well as extensions would be nice too, so each time a driver supports a new version of OpenGL, or changes extensions supported, then existing programs that display capabilities will just work without needing to be re-compiled.


Why would you ever need that? You’re talking about things that you can already retrieve with the appropriate enums. There’s no need for an ill-defined operation like GL_EXTENSION_CAPABILITIES when you can do that yourself.

sure, if you want to have to update your application every time a new extension comes out.

This new functionality would mean that you won’t need to re-write code to allow users to see new capabilities of their graphics cards that have been added since you wrote the program.

Currently, you need to wait for makers of programs like:

to update their databases before being able to see the limitations of your hardware.

sure, if you want to have to update your application every time a new extension comes out.

You’d have to do it with your suggestion anyway. In the example you gave, GL_ARB_draw_buffers has 1 integer worth of data, while GL_ARB_geometry_shader4 has 6 integers of data. There is no theoretical maximum size of data.

Furthermore, let’s say you do this:

int iSize = 0;
GetIntegeri_v(GL_EXTENSION_CAPABILITY_COUNT, i, &iSize); // hypothetical size query
int *pDataArray = new int[iSize];
GetIntegeri_v(GL_EXTENSION_CAPABILITIES, i, pDataArray);

The size query gets you past the problem I mentioned. However, for any extension “i”, do you have any idea what pDataArray[0] actually means? Without intimate knowledge of what extension “i” is (i.e., needing to “update your application every time a new extension comes out”), you have no idea what any of the data in the array means.

In short, the entire exercise is fundamentally meaningless.

This is the one deprecation I really don’t agree with.

In every other case you are removing functionality that could be helpful for beginner programmers writing simple programs, and keeping only the fast path.
In this case you are replacing the current fastest path with a function designed for students still learning basic programming.

I honestly can’t imagine why anyone would want to make a copy of the entire extension string before scanning it, and if they did, why not just use the standard string-copy routines?

glGetStringi(GL_EXTENSIONS, i) can be taught to beginners in SDK tutorials as the way to access extensions so they don’t get it wrong, but the existing string should remain for professional programmers.
Or the planned SDK could provide the source code for a simple routine that scans the extension string, which they could just copy into their own code.

The extension string also doesn’t need to be as long as it is at the moment because the program will tell the driver which version of OpenGL it understands when it creates the context.
The driver only needs to return a list of extensions that are not core in that version.

If you really want to return extensions in a form useful to professionals and easy for beginners to use, then just return two sets (bit arrays): one for “ARB extensions by number” and one for “vendor and EXT extensions by number”.
Then you don’t need to read extension names at all, just check whether a bit is set (unless you want to use an experimental extension that has not been assigned an extension number yet).
The problem of buffer-size differences is solved by telling OpenGL how big your buffer is, or the highest extension number you know or care about; so if your program only understands 353 vendor extensions, it creates a 45-byte bit array and asks the driver to copy into it the 45 bytes of its internal bit array that correspond to extensions 1 to 353.

Yeah, it’s unusable in a program for checking against anything, since as you said, the data is meaningless unless you know what it means.

It isn’t intended for use within normal program code, but it allows you to write a program that displays extensions and their limits and will still be up-to-date if you run it in 10+ years’ time. This is my main motivation, since I’m going to need to update an OpenGL info screen soon (for GLScene, an OpenGL solution for Delphi) that hasn’t been updated for a few years and hence doesn’t display any of the GLSL info. However, after re-writing it will quickly get out of date again, missing important information as new shader types become available (or maybe “break” as core functionality gets deprecated).

I forgot something: it should also return a string of the capability name (either “GL_MAX_DRAW_BUFFERS” or a friendlier “Maximum draw buffers”), and it may be better to return the whole lot as strings rather than integers.

“Maximum 3D texture size”, “2048 x 2048 x 2048”
“Line width range”, “0.5 to 10.0”
“Recursive tessellation limit”, “8”
“Ray-trace bounces allowed”, “8”

glGetString(GL_EXTENSIONS) returning
“GL_ARB_color_buffer_float GL_ARB_depth_texture GL_ARB_draw_buffers GL_ARB_draw_instanced GL_ARB_fragment_program GL_ARB_fragment_program_shadow GL_ARB_fragment_shader GL_ARB_half_float_pixel GL_ARB_half_float_vertex GL_ARB_framebuffer_object GL_ARB_geometry_shader4 etc etc x 10 times the size”

Returning a massive list of extensions like this kinda offends me!

How is this meant to scale up in a parallel way in a many-core environment? Splitting the string up into several chunks and processing it that way seems clumsy.

Whereas, you can have much easier parallel handling of:
ParallelFor(i, min=0, max=NUM_EXTENSIONS, stride=1)
    Process(glGetStringi(GL_EXTENSIONS, i));

ps. Actually performance concerns are probably irrelevant, since you’re only likely to look these up once (per RC).
But having them indexed does allow for the possibility of looking up extra information about an extension using the same number: max capabilities, whether the extension is marked as deprecated, or anything else that may be useful.

Aren’t you all taking this too seriously? The argument about multi-core processing of the string really tops all the multi-core/CUDA/Larrabee nonsense that I have heard lately (I assume it was meant as a joke, though).

But in the end, this is only about one tiny feature that doesn’t change anything, no matter how it is implemented.


Friends, let us bid the extension string a fond farewell. May it rest peacefully in deprecation purgatory.

I didn’t know that multithreaded extension string processing was important to someone :slight_smile:
Are you searching the string every time you render an object or something?

Heh, I probably started talking about this in the wrong thread. I don’t really care whether glGetString or glGetStringi is used to find extensions, but the way glGetStringi is used to retrieve extensions gave me the idea of how to solve the problem of outdated extension-viewer apps, which I believe would add something to OpenGL, take up very little space in the driver (lots of extensions need no extra info), and have no effect on runtime performance.

Aside (since you asked if I was checking every time I render an object :slight_smile: ): GLScene (and GLEE, from what I’ve seen of it?) uses global variables to check whether extensions are available, but this is going to cause some problems when you can create different contexts with wglCreateContext that support different extensions (and you use two different kinds of contexts in the same app). The global variables (at least the ones of interest, or all of them in a general library) will either have to be rechecked every time you switch to another context that has different extensions (slow), or not stored as global variables (more data in use).

if (GL_ARB_draw_buffers and GL_ARB_draw_instanced) then
//do stuff

may need to change to something like

if (MyCurrentRCObj.GL_ARB_draw_buffers and MyCurrentRCObj.GL_ARB_draw_instanced) then
//do stuff

Interestingly, it seems we’re left to our own devices with the WGL extension string. That should make at least (if not exactly) one of us happy :wink: