glGetError and CPU/GPU concurrency

Hi!
I have a question regarding performance and the OpenGL error state.
It is usually recommended to avoid calling glGet* functions during rendering, so that the GPU pipeline doesn't get flushed unnecessarily. What about glGetError(), though? I routinely call it at the end of larger pieces of GL code to make sure everything is fine.
The question is, does glGetError incur a pipeline flush? Would it be advisable to call it only in debug builds? I can think of some cases where I’d like to get notified of errors in release builds, too.
So I figured that before I go over all of my GL code and benchmark, I'd ask here first; maybe someone can share their experience.

Thanks in advance!

On tiled renderers, such as most GLES implementations, glGetError should definitely only be called in debug code. I think the last time I stripped my debug code out of an iPhone project, the difference in performance was actually noticeable.

I think the same kind of thing applies to desktop GPUs, but there I have never noticed a performance impact worth worrying about. I do remove it from production code as a matter of course, though.

This is the guidance in the Apple developer docs on the matter…

Any form of glGet or glIs. Getting state values slows your application. Unless your application is a “middle ware” application, you shouldn’t need to retrieve state values. During development, however, it’s quite common to call glGetError. When your application is ready to go into production, make sure that your[sic] remove glGetError calls and any other state getting and checking functions. As an alternative during development, you can look for errors by setting OpenGL Profiler to break on errors.

OK thanks! I think I’ll change the calls to a macro, then (eeek)…
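
For what it's worth, here is a minimal sketch of what such a debug-only macro might look like; the name CHECK_GL_ERROR and the logging are illustrative, not from any particular codebase. Note that glGetError is called in a loop, since an implementation may keep several error flags and each call clears and returns only one of them:

    #include <stdio.h>
    #include <GL/gl.h>

    #ifndef NDEBUG
    /* Drain all pending GL error flags and log them with the call site. */
    #define CHECK_GL_ERROR()                                            \
        do {                                                            \
            GLenum err;                                                 \
            while ((err = glGetError()) != GL_NO_ERROR)                 \
                fprintf(stderr, "GL error 0x%04X at %s:%d\n",           \
                        (unsigned)err, __FILE__, __LINE__);             \
        } while (0)
    #else
    /* Compiles away entirely in release builds. */
    #define CHECK_GL_ERROR() ((void)0)
    #endif

You would then call CHECK_GL_ERROR() at the end of each larger block of GL calls, exactly as you do now, and release builds pay nothing for it.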

In newer NVIDIA drivers, "error reporting" can be deactivated. Does anybody have experience with this?
Will GL errors actually be ignored, so that there is no performance cost for glGetError()?
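
In case it helps: whether that driver switch uses the same mechanism is an assumption on my part, but the standardized way to opt out of error generation is the GL_KHR_no_error context flag. In such a context, errors are simply not recorded (triggering one becomes undefined behaviour) and glGetError always returns GL_NO_ERROR, so the driver skips the bookkeeping entirely. A minimal sketch using GLFW 3.2+, which exposes the flag as a window hint:

    #include <GLFW/glfw3.h>
    #include <stdio.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;

    /* Request a "no error" context in release builds only; this
       requires driver support for GL_KHR_no_error, otherwise
       context creation may fail. */
    #ifdef NDEBUG
        glfwWindowHint(GLFW_CONTEXT_NO_ERROR, GLFW_TRUE);
    #endif

        GLFWwindow *win = glfwCreateWindow(640, 480, "no-error context",
                                           NULL, NULL);
        if (!win)
        {
            fprintf(stderr, "context creation failed\n");
            glfwTerminate();
            return 1;
        }
        glfwMakeContextCurrent(win);

        /* ... render; here glGetError() always reports GL_NO_ERROR ... */

        glfwDestroyWindow(win);
        glfwTerminate();
        return 0;
    }

The obvious trade-off: with error generation disabled, you lose any chance of being notified of errors in release builds, so it only makes sense once the debug build is known to run clean.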