Interest in specialized GL function loader.

Just one… very morbid thought: does your tool require maintenance as new GL versions and extensions are added? Would it be possible to tweak it as follows:
[ol]
[li]Same nice command line options as now[/li]
[li]Takes as input a set of GL header files, from which it generates the data. The idea is that glext.h (for example) wraps the functions for an extension foo in a #define GL_foo/#endif pair.[/li]
[/ol]

Though, I wonder about the horror of reused tokens and “dependent” extension functions (for example in an extension, “if GL_foo extension is supported then also the following functions for GL_bar are added: glBarFoo()” ).

At any rate, looking forward to futzing with this.

Just played around a bit on Linux. The first example can be generated with Lua 5.1 and compiles (with warnings) using GCC 4.7.2. Runtime tests will follow. I filed multiple bug reports covering various issues.

Takes as input a set of GL header files, from which it generates the data. The idea is that glext.h (for example) wraps the functions for an extension foo in a #define GL_foo/#endif pair.

If you or someone else would like to do that, you may. But I’m not interested in writing a parser for a header file, just to reverse engineer data that already exists in a usable form. It’s much easier to just run my existing process on the .spec files, then use the Lua outputs in this project.

Also, I don’t think that glext.h handles the whole core/compatibility thing, so you wouldn’t be able to produce a clean GL core header. And it’s missing all of the GL 1.1 enums and functions.

Though, I wonder about the horror of reused tokens and “dependent” extension functions (for example in an extension, “if GL_foo extension is supported then also the following functions for GL_bar are added: glBarFoo()” ).

There’s not much that can be done about that. Though I did put in some special code for loading core functions: if it detects that a “core function” name ends in EXT (such as “glTextureStorage1DEXT”, which is marked as version 4.2), it won’t count that function as “missing” if it wasn’t found.

I filed multiple bug reports covering various issues.

I fixed a couple of those. The others will take a little time to resolve.

OK, I think I’ve fixed all of the outstanding bugs (those not entered by me). Also, I have a C++ style that does some nice namespace wrapping for everything.

So I’m calling this version 0.2. Take a look if you have the time.

Oh, and the documentation is a lot more complete. Also, I’ve documented the mechanism for creating your own writing styles, so that you can use the basic generation framework to generate whatever style of OpenGL loader you want.

I’m going to make a style that’s compatible with the Unofficial OpenGL SDK (that is, one that uses function pointers loaded by that system), so that you can use its clean headers with the SDK. I might do the same with GLEW, so that you can use its clean headers with GLEW-based loading. In both cases, the loading calls will be forwarded to the other library, while the basic stuff will be handled here.

OK, so version 0.3 is up, which is mostly just a bug-fix release. I’ve finished a GLee-style loader (i.e., no need to call a specific function to load function pointers), but it’s not in 0.3 yet; it’ll be going into the next release.

The main issue is that… I’ve kinda created something more than just a loader generator.

The differences between 0.2 and 0.3 are fairly minor from the outside perspective, just some bug fixes. The real changes are internal.

Basically, in creating a flexible system for building loaders, I had to create a flexible system for arbitrarily processing the .spec files. Which means that you can use it to build anything from that data. You could generate Python bindings. Lua bindings. Java or C# bindings. Pretty much anything you want, all in relatively few lines of code.

I’m not sure where to go with this from here. I’ve documented some of what you need to know to harness its potential, but is this something that people will actually use? Should I be advertising it as a general processing system, or contacting the OpenTK/LWJGL/etc people to let them know that I’ve basically done half their jobs for them?

What can be done with this system? Is this something people even want to do?

Cool. I mean, that’s really almost what I need right now :wink:
Now the sad part is, there are no spec files for EGL/GLES …

There are some databases for EGL and GL ES as part of Regal
located at https://github.com/p3/regal/tree/master/scripts

See egl.py and gl.py in particular.

  • Nigel

That sounds familiar! :slight_smile:

I think I ought to mention here that GLEW does have a “subset” mode (undocumented, and
in a separate branch) which allows for opting-in on an extension-by-extension basis. This
has been especially useful for statically linking small applications.

An alternative approach that I have in mind for GLEW and Regal is to implement a pseudo
pre-processor to take a vanilla/stock GLEW, Regal or whatever loader and strip out all
the irrelevances such as vendor extensions, things long merged into core, etc etc.
Somehow I find it a bit more appealing to have a proliferation of alternatives derivable
from a complete source distribution, than having the code generation scripts handle
all the scenarios.

Anyway, the main suggestion I have for GL loading is to support core contexts well;
that’s a long-standing limitation of GLEW.

  • Nigel

Somehow I find it a bit more appealing to have a proliferation of alternatives derivable
from a complete source distribution, than having the code generation scripts handle
all the scenarios.

A code generator is much easier to maintain. Your way, you have to maintain both a code generation system (to generate the original source) and a pre-processing system that removes the extra stuff.

Also, it’s very easy to screw up the pre-processing system through a simple lack of the proper information. For example, how do you decide whether to include the enums for ARB_tessellation_shader? Remember: it’s a 4.0 core extension, so it’s very much part of 4.0+. But it’s also an extension.

The GLEW headers, for example, consider it an extension; it lives under a comment that says “GL_ARB_tessellation_shader”. No mention is made of the fact that it’s also core 4.0+ functionality and should be included in any headers that use 4.0+ regardless of whether the user asks for it or not. Without knowledge of what is and is not a core extension, your ability to get clean headers from 3.0+ with this system is diminished.

The OpenGL spec files, the source that the GLEW headers are built from, actually have this info. Enums from ARB_tessellation_shader are specifically stated to come from both the extension and the version.

The GLEW headers are the output of a lossy code generation process. Generating code from such output is probably not the best idea.

Granted, the .spec files alone don’t have enough information to do this properly either. Enums are correctly classified, but functions are not, which is why my loader systems have to keep a manually updated list of what the core OpenGL extensions are.

Even so, it’s still a lot easier to make this stuff from first principles than to parse someone else’s headers.

Anyway, the main suggestion I have for GL loading is to support core contexts well;
that’s a long-standing limitation of GLEW.

GLEW is pretty much the only extension loader still supported today that doesn’t support core contexts. All of my styles handle core contexts.

I’d say that depends. What we’re finding with Regal is a whole lot of use-cases
that are fairly platform-specific, along with a general desire to slim things down in
various ways. Diagnostics in debug mode, for example.

You’re right: it depends on having things sufficiently #ifdef-ed for preprocessing to work.
With GLEW, for example, I’d rather merge the subset patch into the mainline and preprocess
it away again for release, rather than complicate the code generation with conditionals.

Regal already has a bunch of compile-time options, so pre-processing ought to be equivalent
to whatever would have been specified for the compiler.

  • Nigel