Is there any way to make a rendering context current without binding it to a specific drawable? The GLX spec doesn't mention a way to do this, although it more or less implies it should be possible (by creating a drawable/window, calling glXMakeCurrent, and then destroying the window/drawable). That's what I do now, but it is lame because:
1) It's a hack.
2) I don't know whether this extra window will eat up any resources (memory, etc.), even though it is being destroyed.
So the question is: is there an official way to do this that escapes me? If not, is there a better unofficial way?
[This message has been edited by zen (edited 12-29-2003).]
The resources associated with the drawable won’t be released until it is unbound from its context. What use would a context be without a drawable?
According to the GLX 1.2 spec, when a bound, current drawable is destroyed, the rendering context remains current and all state is updated, but no drawing is done. So I assume the drawable does get unbound and all drawable resources (i.e. buffers, etc., but not, say, textures) get freed.
I want to be able to upload textures even before creating a window (before windowing-system initialization, that is). It's just a matter of overall design. Not terribly important, but on the other hand it should be easy to do.
My guess is that this is an area where you are likely to find lots of driver bugs. I’ve been doing this stuff for a long time, and I’ve never heard of anyone wanting to do that (not that there’s anything wrong with it per se). I can imagine that other driver writers are probably in the same boat.
Let me know if it works.
It seems to work fine with the method described above, but I don't much like it, as it's a hack for something that should normally be officially possible (with something like glXMakeCurrent(dpy, None, ctx)).
This hack is a very bad idea. Try to redesign that part of your software if possible. If you really need to use the context without a drawable, use a small dummy window, say 32x32 (without mapping it on the screen, of course), and don't destroy it while the context is current on it (don't break the rules). Otherwise you never know what driver bug you may come across. And don't be paranoid: one small window won't eat up the window-system/OpenGL resources; it will make no difference at all. A 32x32 double-buffered window at 32 bpp with 32-bit depth+stencil takes about 12 KB. Let's double that just in case and round up to a power of two, and it makes 64 KB, which is, of course, nothing compared to whatever amount of video memory you have.
I changed the hack to making the context current using DefaultRootWindow(dpy). That was also dangerous, as the root window might not be compatible (in the GLX-spec sense) with the context, which would raise a BadMatch error. Well, the design changed (not because of this) anyway, so now I can create a dummy window until the actual window is created.
The previous hack did not break the rules, it was mentioned in the spec, but it was pretty ugly, even for a hack. Also, I'm not being paranoid about wasting resources, but when you work with huge, complex pieces of software like X, you can't make assumptions about their internals if they're not mentioned in the specs. Do you know how the memory for the window buffers is allocated? They may be performing 'optimizations' that result in more memory being allocated than width x height x bpp. Admittedly it's not probable, but you never know…
Yes, it does not break the formal rules of the GLX spec. I meant that it's generally risky to rely on such corner-case rules. There's no telling whether driver writers bother to implement the spec that carefully. And even if they do (as they ought to), the possibility of bugs in such areas is high, because of how little testing they get (since they are rarely exercised in practice).
This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.