Share display lists between contexts

I’m using GLX on Debian with two screens. I’ve found that to get display lists to work I have to load them in twice, once for each screen. There seem to be some GLX calls that allow sharing, but I can’t find any examples of how to do this. Can anyone advise? Thanks.

Hi, thanks for that. The problem is I’m unsure how to use it for display lists.
In my code to init the screens I loop through each screen and use it as follows:

/* get an appropriate visual */
vi = glXChooseVisual(DisplayPointer, screenlist[count]->screen, attrListDbl);

screenlist[count]->doubleBuffered = True;   /*double buffer always*/
 /* create a GLX context */
screenlist[count]->ctx = glXCreateContext(DisplayPointer, vi, 0, GL_TRUE);

What do I do to share lists? I think I replace the 0, but am unsure with what. Thanks

Quote from the spec:

“If shareList is not NULL, then all display-list indexes and
definitions are shared by context shareList and by the newly
created context.”

Basically, pass the handle of another rendering context as the third parameter, and both contexts (the old one and the new one) will share their display-list space.

Hello Zengar,
if you don’t mind me being bad mannered: have you done multi-screen in Linux? The reason I ask is that I am totally unable to get this to work. I also found this on these forums, which is the same problem:

In my code I loop through the screens and create a context for each one. I’ve tried both of the following, and both give errors:

if (count < 1)
    screenlist[count]->ctx = glXCreateContext(DisplayPointer, vi, 0, GL_TRUE);
else /* share with the first screen's context */
    screenlist[count]->ctx = glXCreateContext(DisplayPointer, vi, screenlist[0]->ctx, GL_TRUE);

if (count < 1)
    screenlist[count]->ctx = glXCreateContext(DisplayPointer, vi, 0, GL_TRUE);
else /* try the current context instead */
    screenlist[count]->ctx = glXCreateContext(DisplayPointer, vi, glXGetCurrentContext(), GL_TRUE);

both give errors very similar to the one reported in the thread above.

knobby, sorry, but I can’t help you on this, as I have never programmed on Linux (and haven’t played with 3D graphics for several years now). I am more of a theoretician 🙂

I have set it up several ways and worked with it on a user level, both one GPU card per display and one GPU card driving multiple displays, but not had the need to write a multi-screen GLX app.

If one GPU card is driving multiple displays, consider setting up TwinView (if NVidia), because then you only have one super-wide X screen to deal with in your code. For all the details, see the NVidia README at /usr/share/doc/NVIDIA_GLX-1.0/README.txt.

With any other config, you may need to track down your BadMatch problem.

Regardless, post any questions/problems to the Linux forum; AaronP@NVidia is pretty good about responding.

Thanks Chaps,
I’ll give the other forum a try in the next few days

BTW, setting up an X config with NVidia on Linux is really simple. For instance:

  • nvidia-xconfig --twinview
  • nvidia-xconfig --xinerama
  • nvidia-xconfig -a

Use “nvidia-xconfig -A” to see more options.

FYI, I’ve just fought this fight.

Here’s what I ended up understanding/thinking:

Indirect GLX contexts can share quite readily, but lose some GL extensions.

Direct GLX contexts can share if the server side is in the same ‘address space’ - I read this as the same graphics card - but then about 25% of the descriptions add ‘and the same X screen’.

And that’s what I found: unless the two contexts are on the same X screen, I can’t share (textures, in my case). I ended up using TwinView as recommended here. I would have preferred separate X screens, but there you go. You also get pseudo-Xinerama support for free.

As a side note, XRandR doesn’t seem to work too well; I’m about to try the NVidia-specific APIs.


This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.