I have a problem with my OpenGL program. I use PBuffers for offscreen rendering (depending on the user interaction, possibly several of them). I had a very “simple” idea: make PBuffer creation fail-safe even if there is a BadAlloc, i.e. the application should remain usable after a BadAlloc (in glXCreatePbuffer).
The first thing I did was to create a function for the error handler:
When a BadAlloc occurs, the handler sets a global variable, which I check after the glXCreatePbuffer call. The idea is to handle the BadAllocs this way. This works as expected: as long as there are enough resources I can create a PBuffer, and once all resources are used up I get notified by checking the global variable.
After a BadAlloc error, however, I get into trouble when I create a new GL context and try to make this new context current:
The context creation itself works fine (at least I could not find an X or GL error), but the process never returns from the glXMakeCurrent call. There is no SegFault or anything like that; the application just “waits”. I took a deeper look at what is going on with a debugger/strace and found a number of getpid() calls:
#0 0x0000002aab8cdab9 in getpid () from /lib64/tls/libc.so.6
#1 0x0000002aa9027690 in glXChannelRectSyncSGIX () from /usr/lib64/libGL.so.1
#2 0x0000002aa9026539 in glXChannelRectSyncSGIX () from /usr/lib64/libGL.so.1
#3 0x0000002aa902e949 in glXChannelRectSyncSGIX () from /usr/lib64/libGL.so.1
#4 0x0000002aa901f518 in ?? () from /usr/lib64/libGL.so.1
#5 0x0000002aa904d88f in _nv000004gl () from /usr/lib64/libGL.so.1
#6 0x0000002aa904d35b in _nv000004gl () from /usr/lib64/libGL.so.1
#7 0x0000002aaca0214c in _nv000096gl () from /usr/lib64/libGLcore.so.1
#8 0x0000002aac9b9419 in _nv000111gl () from /usr/lib64/libGLcore.so.1
#9 0x0000002aac9f9fab in _nv000096gl () from /usr/lib64/libGLcore.so.1
#10 0x0000002aacae3d1b in _nv000049gl () from /usr/lib64/libGLcore.so.1
#11 0x0000002aacae432a in _nv000049gl () from /usr/lib64/libGLcore.so.1
#12 0x0000002aacae46df in _nv000049gl () from /usr/lib64/libGLcore.so.1
#13 0x0000002aac9bb6fb in _nv000106gl () from /usr/lib64/libGLcore.so.1
#14 0x0000002aa902820f in glXChannelRectSyncSGIX () from /usr/lib64/libGL.so.1
#15 0x0000002aa902bad6 in glXChannelRectSyncSGIX () from /usr/lib64/libGL.so.1
#16 0x0000002aa9025ea2 in glXMakeCurrent () from /usr/lib64/libGL.so.1
If I start my application a second time (while the first one is still waiting), the first application stops waiting and continues running as expected. I used gDEBugger to find out which GL command triggers this: it is a glXCreatePbuffer call (on the same display). That glXCreatePbuffer call also fails (all resources are still in use), but that is okay; the second application behaves as expected.
So, here is what I have done to narrow the problem down:
I tried a recent version of Mesa to get an idea of what is going on. With the Mesa library the program works fine; however, there was no resource conflict there, so the error handling code was never even exercised. I also checked my code with gDEBugger but was not able to find a single GL error.
I can only guess what is going wrong in my program. My guess is that the glXCreatePbuffer call that produces the BadAlloc does not free/unlock some resource that it locks internally, but I have no idea what that resource might be. The Mesa code uses a few LockDisplay/UnlockDisplay calls; maybe it is something like that, but I am not sure. A new glXCreatePbuffer call (even from a different application) “unlocks” this resource and allows the process to continue.
I use a 64-bit Linux system:
Red Hat Enterprise Linux WS release 3 (Taroon Update 4)
and an NVIDIA graphics card:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: Quadro FX 1000/AGP/SSE2
OpenGL version string: 2.1.0 NVIDIA 96.31
Any suggestion, link, or hint is very welcome.
Thank you all