Error notifications

I would love to be able to register a callback that gets called when the error variable is set. Not having to sprinkle glGetError() calls throughout code to catch a runaway GL error deep in a rendering tree would be very convenient. This way, I could break in the debugger and look at the stack trace.

OpenGL doesn’t do callbacks, partially because that’s really hard to support across multiple languages. Your case is also hard because of the pipelined nature of the hardware (i.e. before the chip notices something is wrong you’ve already called a bunch of other OpenGL commands).

It’s also not really necessary. If you want to find out when and where you throw errors, use a debugger like gDEBugger or glIntercept or some other OpenGL stream tracer that can check the error flags after each command automatically.

OpenGL doesn’t do callbacks, partially because that’s really hard to support across multiple languages.
The GL spec is defined for C. That there have been ports to other languages doesn’t have anything to do with the definition of OpenGL being bound to C. As such, it’s on the porters from C to their language to find a way to substitute their own data structures for C function pointers. After all, Java doesn’t have pointers, but the GL spec is based on them in several places. That hasn’t stopped Java ports.

Your case is also hard because of the pipelined nature of the hardware (i.e. before the chip notices something is wrong you’ve already called a bunch of other OpenGL commands).
In general, GL errors are not thrown by the chip, but by the driver. It knows what the current state is, and it deals with it as such. So, most errors are thrown during a function call. I’m fairly certain the spec even disallows async error firing (but I’m not sure).
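A small sketch of the point (the exact error depends on the call, but a bogus enum is recorded by the driver during the call itself, not later by the chip):

#include <assert.h>
#include <GL/gl.h>

void demonstrate_immediate_error(void)
{
    glEnable((GLenum)0xFFFF);   /* not a valid capability */
    /* The driver records GL_INVALID_ENUM while executing the call above;
       no round trip to the hardware is needed to notice it. */
    assert(glGetError() == GL_INVALID_ENUM);
}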

It’s also not really necessary. If you want to find out when and where you throw errors, use a debugger like gDEBugger or glIntercept or some other OpenGL stream tracer that can check the error flags after each command automatically.
Shouldn’t OpenGL stand alone? I mean, why should a library depend so heavily on some other library for debugging functionality?

Originally posted by IvanMilles:
I would love to be able to register a callback that gets called when the error variable is set. Not having to sprinkle glGetError() calls throughout code to catch a runaway GL error deep in a rendering tree would be very convenient. This way, I could break in the debugger and look at the stack trace.
If you use C or C++ I suggest you just use assert(glGetError()==0). This is a nop in release builds and in debug builds does exactly what you want.
Don’t know about other languages, but I suppose assertions are useful enough to be a widespread language feature.
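For example (a minimal sketch; ASSERT_GL is a made-up name, and assert compiles away when NDEBUG is defined, as in typical release builds):

#include <assert.h>
#include <GL/gl.h>

/* GL_NO_ERROR is defined as 0, so this is the same test as glGetError()==0. */
#define ASSERT_GL() assert(glGetError() == GL_NO_ERROR)

void render(void)
{
    glClear(GL_COLOR_BUFFER_BIT);
    ASSERT_GL();   /* debug builds break here on any error; release builds skip the check */
}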

Originally posted by Korval:
I’m fairly certain the spec even disallows async error firing (but I’m not sure).
I’m sure it’s not allowed.
(and I don’t think it would help anyone if it were allowed but that’s a bit OT here)

OpenGL is designed, intentionally, to be a “push” API. That is, you can execute commands without having to worry, for the most part, about synchronization or return results. This allows, for example, the GL to buffer many commands without executing them. This is why you have commands like glFlush and glFinish. It also allows efficient implementation of networked rendering (i.e., GLX).

In fact, applications can be written such that they never need to rely on a result coming back from the GL. This is why applications are allowed to specify their own object IDs instead of being required to get them from the GL. This makes it possible to write debuggers or other applications that capture and replay streams of GL commands. There are some exceptions to this (e.g., vertex arrays), but none of them fundamentally violate this design decision.
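As a sketch of what that looks like in practice (under the pre-3.x rules, binding an unused name creates the object, so no glGenTextures round trip is needed):

#include <GL/gl.h>

/* Application-chosen texture names: no result ever has to come back from the GL. */
enum { TEX_BRICK = 1, TEX_SKY = 2 };

void upload_textures(void)
{
    glBindTexture(GL_TEXTURE_2D, TEX_BRICK);   /* binding creates the object */
    /* glTexImage2D(...); */
    glBindTexture(GL_TEXTURE_2D, TEX_SKY);
    /* glTexImage2D(...); */
}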

I think that it is extremely unlikely that any mechanism, such as a callback, that fundamentally breaks this design philosophy will ever be adopted. Then again, stranger things have happened.

If you use C or C++ I suggest you just use assert(glGetError()==0). This is a nop in release builds and in debug builds does exactly what you want.
First, this is not terribly transparent. It’s something you have to put after every OpenGL command.

Second, even when it asserts, you still don’t know what error it got.

Being able to register a callback for errors would be a good idea.

In fact, applications can be written such that they never need to rely on a result coming back from the GL.
Really? Need we point out ARB_FBO and CheckFramebufferStatus? Or, how about the results from various shader compilation/linking?

I think that it is extremely unlikely that any mechanism, such as a callback, that fundamentally breaks this design philosophy will ever be adopted.

Despite how useful it would be to a GL developer? APIs exist to serve the user; this must be the principal design goal of any API.

Need we point out ARB_FBO and CheckFramebufferStatus? Or, how about the results from various shader compilation/linking?
Obviously, not all possible applications can be written this way. Consider a function closer to OpenGL 1.0: glReadPixels().

What idr says is that some applications can be designed this way.

What idr says is that some applications can be designed this way.
The real point is that this doesn’t violate anything that OpenGL depends on. It doesn’t break GLX or networked rendering. It is the functional equivalent of wrapping every GL call with a check to see if the call produced an error, and if it did, calling a callback.

And if an application is designed not to care about return values, fine; they don’t have to use the callback, just as they don’t have to use glReadPixels or CheckFramebufferStatus, or even glGetError().
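To make the equivalence concrete, a hypothetical sketch (the callback type, g_callback, and CHECKED are all made up for illustration; nothing like this exists in the GL API itself):

#include <GL/gl.h>

typedef void (*gl_error_callback)(GLenum error, const char* command);
static gl_error_callback g_callback;   /* registered by the application */

/* Wrap a GL call; if it recorded an error, hand it to the callback. */
#define CHECKED(call)                           \
    do {                                        \
        GLenum e_;                              \
        call;                                   \
        e_ = glGetError();                      \
        if (e_ != GL_NO_ERROR && g_callback)    \
            g_callback(e_, #call);              \
    } while (0)

/* Usage: CHECKED(glBindTexture(GL_TEXTURE_2D, 5)); */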

Originally posted by Korval:
As such, it’s on the porters from C to their language to find a way to substitute their own data structures for C function pointers.

Error handling is nasty in any language. I think the OpenGL approach is the lowest common denominator, and I’m happy with that.


In general, GL errors are not thrown by the chip, but by the driver. It knows what the current state is, and it deals with it as such. So, most errors are thrown during a function call. I’m fairly certain the spec even disallows async error firing (but I’m not sure).

That’s the nice thing about OpenGL. Errors don’t fire, they are just noticed. Unless the app picks them up, they can be as asynchronous as they want. I would never expect glGetError to be a cheap operation.
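In code, “noticing” an error looks something like this (a sketch; the spec permits several error flags, so it is customary to drain them in a loop):

#include <stdio.h>
#include <GL/gl.h>

void drain_gl_errors(void)
{
    GLenum err;
    /* Each glGetError call returns one recorded error and resets its flag;
       GL_NO_ERROR means every flag has been cleared. */
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X noticed after the fact\n", err);
}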

Shouldn’t OpenGL stand alone? I mean, why should a library depend so heavily on some other library for debugging functionality?
Well, you can debug OpenGL as is. It might not be as convenient as you would like, but I would also prefer to have a built-in stack trace in the run-time library so that I don’t just get a core dump but a nice stack trace with the offending code etc. It’s just that you don’t need this overhead for a working program, and handling non-working programs is the domain of debuggers, IMHO.

Originally posted by Korval:
First, this is not terribly transparent. It’s something you have to put after every OpenGL command.

Second, even when it asserts, you still don’t know what error it got.

Being able to register a callback for errors would be a good idea.
I’m not sure about that.

IvanMilles asked for something that would break and land him in the debugger, and assert can do that. I suppose the goal for this is easing development. I do check for GL errors in lots of places in debug builds, but barring GL_OUT_OF_MEMORY (which has never happened to me so far), release builds IMO should not cause any GL errors.

If you build your own convenience functions around glGetError, you can easily get your error code and translate it to text, log it to a file along with a description of where the error occurred, throw a message box or whatever else you’d like. And then you might assert(0), as a convenient way to get into the debugger after you’ve determined that something just went awfully wrong.
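Such a convenience function might look like this (a sketch; check_gl and the logging details are made up, and gluErrorString from GLU does the error-to-text translation):

#include <assert.h>
#include <stdio.h>
#include <GL/gl.h>
#include <GL/glu.h>

void check_gl(const char* where)
{
    GLenum err = glGetError();
    if (err != GL_NO_ERROR)
    {
        /* translate the code to text and log it with a description of the location */
        fprintf(stderr, "GL error '%s' at %s\n",
                (const char*)gluErrorString(err), where);
        assert(0);   /* convenient way to land in the debugger */
    }
}

/* Usage: check_gl("after texture upload"); */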

This all works. It slightly reduces debug build performance … I don’t mind.

The last API I remember that had error callbacks was Glide, and there it specifically did not work in end-user drivers. You had to get a special debug driver to use it. The idea probably was that bugs should be found and fixed during development.

The real point is that this doesn’t violate anything that OpenGL depends on. It doesn’t break GLX or networked rendering. It is the functional equivalent of wrapping every GL call with a check to see if the call produced an error, and if it did, calling a callback.
You need to think more clearly about what would be required to implement this in a client-server situation. In GLX, the client-side library batches many commands together and sends them to the server. It sends the commands when either the buffer is full or a command is issued that requires a reply (e.g., a “get” command, glFlush, glFinish, etc.).

The server has no way to directly deliver a callback to the application. The only thing that can deliver a callback is the client-side library. In order for that to happen, the client-side library would have to:

1. Flush every individual command.
2. Ask the server, after each command, “Should I deliver a callback?”
3. After receiving a reply from the server, conditionally deliver the callback to the client.

Basically what you have is a call to glGetError after every GL command. If that’s what is desired, then put code in for debug builds that calls glGetError after either every command or after certain groups of commands. As a previous poster suggested, a properly written application should never need to call glGetError. It exists solely as a development aid.

You are correct. It is the equivalent of wrapping every GL call with a call to glGetError. However, for the reasons explained above, that fundamentally goes against the design of OpenGL.

An IHV or operating system vendor could, however, develop a “debug” library that has this type of functionality. Since this is not something that would ever be used in a production build, it’s not something that would ever be part of the OpenGL standard.

In fact, I plan to look into doing this for the next major release of the X.org X-Windows server.

Not quite on topic, but:
Can someone tell me what to do if an OpenGL implementation throws an exception (SEH) instead of just setting an error for glGetError? E.g. the ATI Catalyst driver throws an exception in glBindTexture if virtual memory runs low/out. Afterwards the render context is invalid, so a try-catch block doesn’t help very far. Not to mention that it violates the specification.

Or simply take one of the extension loading libs and manipulate the code generation.

For debug builds, create a function which calls the original function pointer (you can also get those for the OpenGL 1.1 core functions), checks glGetError, and then e.g. shows a message box or throws an exception. With proper #defines you can also use macros like __FILE__ and __LINE__.

As all of this code would be generated, it would be easy to do. You could even skip often-called functions like glVertex, which shouldn’t fail at all, and use them unchecked.

Or you could #define the check as a { … } block where you call the Win32 API function DebugBreak to jump into the debugger there (a sketch of that variant follows the example below).

IMHO, with a little work done, no extension is necessary.

#if DEBUG
#include <stdio.h>      /* snprintf */
#include <GL/glu.h>     /* gluErrorString */

#define glColor3f(r,g,b) _checked_glColor3f(__FILE__,__LINE__,(r),(g),(b))

/* __LINE__ expands to an int, so the line parameter is an int, not a string. */
void _checked_glColor3f(const char* file, int line, float r, float g, float b)
{
    _function_ptr_glColor3f(r, g, b);         /* call the real entry point */
    GLenum err = _function_ptr_glGetError();  /* there is no GL_OK; the "no error" value is GL_NO_ERROR */
    if (err != GL_NO_ERROR)
    {
        char msg[256];
        snprintf(msg, sizeof(msg), "GL error %s in file %s line %d",
                 (const char*)gluErrorString(err), file, line);  /* gluErrorString, not gluErrorText */
        ShowFancyMessageBox(msg);
    }
}
#endif
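And a sketch of the DebugBreak variant mentioned above (Windows-only; DebugBreak is declared in <windows.h>):

#if DEBUG
#include <windows.h>
#include <GL/gl.h>

/* Expands to a block that traps straight into the attached debugger
   whenever the preceding GL command recorded an error. */
#define CHECK_GL_BREAK()                          \
    {                                             \
        if (glGetError() != GL_NO_ERROR)          \
            DebugBreak();                         \
    }
#endif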