OpenGL with multiple graphics cards

I have an OpenGL application that renders into two widgets using two different threads (“A” and “B”) sharing the same OpenGL context. So far this app runs on a Linux system with a single graphics card. I understand that when thread “A” renders into the first widget, thread “B” has to wait until “A” finishes its work.
I was thinking about speeding up the app by using a system with two graphics cards (PCI Express) and two monitors. For simplicity, each card drives its own monitor. How would I initialize the GL contexts in this case? Is there sample code I could use?
I use Linux 2.6/X11.

i’ve never done this, so i can’t give you a complete solution, only some hints on how to start:

with 2 cards, i guess you have to run 2 X servers. if you use Xlib, you have to connect to both servers like this:

Display *display_one = XOpenDisplay("localhost:0.0");

Display *display_two = XOpenDisplay("localhost:1.0"); 

i’m not absolutely sure whether the second X server is addressed with localhost:1.0 or localhost:0.1, but you can find out easily: XOpenDisplay() returns NULL if it cannot open the display you named.

when you have 2 display variables, you can create a window on the graphics card you want:

 Window window_one = XCreateWindow(display_one, ...);
 Window window_two = XCreateWindow(display_two, ...); 

i recommend you try with Xlib first, and if it works, we can go on using Motif :wink:

I remember a ‘similar’ question months ago. Try a search (I don’t remember whether it was on the Linux forum, the beginner one, or here).

Why do you need the same context for both your rendering threads?

As RigidBody said, use 2 displays (:0.0 and :1.0).

Thanks guys for the prompt replies.
With one graphics card I have to share the context between threads, but with two graphics cards I don’t have to share the context.
XOpenDisplay() will solve my problems, or at least let me move on to the next problem :slight_smile:

We have a system with two graphics cards (NVIDIA Quadro 4000) and want to display two independent graphics windows from a single application. How do we do this using OpenGL calls?

At any given time, there is only one active OpenGL context in a thread.
On Windows, you have to call wglMakeCurrent to switch contexts for the calling thread.
The GLX equivalent is glXMakeCurrent.

On Windows, a context can only be current in one thread at a time, so the thread using the context must call wglMakeCurrent( NULL, NULL ) before another thread can make the context current.
On Linux, as far as I can remember, you can have two threads bind the same context at the same time, but that would be a bad idea anyway (at the very least you’d need some kind of mutex synchronization before actually calling any GL commands, or the current OpenGL state would be unknown).
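To make that make-current discipline concrete, here is a hedged sketch of the GLX side for two threads sharing one context. The names (dpy, win, ctx) are placeholders, not part of any real program, and the mutex implements the synchronization mentioned above. Link with `-lGL -lX11 -lpthread`.

```c
/* Sketch: safe pattern for two threads sharing one GLX context.
   All parameters are placeholders from the discussion above. */
#include <GL/glx.h>
#include <pthread.h>

static pthread_mutex_t ctx_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread binds the context before issuing GL calls and releases
   it afterwards so the other thread can bind it in turn. */
static void render_locked(Display *dpy, GLXDrawable win, GLXContext ctx)
{
    pthread_mutex_lock(&ctx_lock);
    glXMakeCurrent(dpy, win, ctx);    /* bind in this thread */

    /* ... issue GL commands for this widget here ... */

    glXMakeCurrent(dpy, None, NULL);  /* release; this is the analogue
                                         of wglMakeCurrent(NULL, NULL) */
    pthread_mutex_unlock(&ctx_lock);
}
```

With one context per GPU (the two-card setup discussed in this thread), the lock becomes unnecessary because each thread owns its context outright.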

There are a number of ways you can do this, but one way that parallelizes very well (on Linux) is:

  1. Set up the GPUs to each provide their own X screen (e.g. :0.0 and :0.1)
  2. Create two different threads (or processes) in your application (call these thread 0 and thread 1)
  3. In each thread, create an X window and a GLX context.
  4. Enjoy parallel OpenGL rendering via both GPUs (each to its own window/context/GPU).
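The four steps above can be sketched as follows. This is a minimal example under stated assumptions, not a drop-in implementation: the screen names ":0.0" and ":0.1" assume the separate-X-screens setup from step 1, and error handling is minimal. Link with `-lGL -lX11 -lpthread`.

```c
/* Sketch: one worker thread per GPU, each with its own display
   connection, window, and GLX context (nothing shared, no locking). */
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <pthread.h>
#include <stdio.h>

static void *render_thread(void *arg)
{
    const char *screen_name = arg;   /* e.g. ":0.0" or ":0.1" */

    /* Step 3a: a private connection to this GPU's X screen. */
    Display *dpy = XOpenDisplay(screen_name);
    if (!dpy) { fprintf(stderr, "cannot open %s\n", screen_name); return NULL; }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) { XCloseDisplay(dpy); return NULL; }

    /* Step 3b: a window using the GL-capable visual. */
    XSetWindowAttributes swa = {0};
    swa.border_pixel = 0;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 640, 480, 0, vi->depth, InputOutput,
                               vi->visual, CWBorderPixel | CWColormap, &swa);
    XMapWindow(dpy, win);

    /* Step 3c: a context private to this thread/GPU (no sharing). */
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    XFree(vi);
    glXMakeCurrent(dpy, win, ctx);

    /* Step 4: render loop would go here:
       draw, glXSwapBuffers(dpy, win), repeat. */

    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return NULL;
}

int main(void)
{
    /* Step 2: one thread per GPU/screen. */
    pthread_t t0, t1;
    pthread_create(&t0, NULL, render_thread, ":0.0");
    pthread_create(&t1, NULL, render_thread, ":0.1");
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
```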

There are other ways to set up multiple GPUs using the NVidia drivers. Check out:
/usr/share/doc/NVIDIA_GLX-1.0/README.txt for more details.

And use nvidia-xconfig or nvidia-settings to have their tools do this configuration for you.

I have a similar question but after researching I could find this:

Not sure why NVIDIA does not provide one for XWindows.

I still hope for a unified context management system coming soon.

[QUOTE=Janika;1238773]I have a similar question but after researching I could find this:

Not sure why NVIDIA does not provide one for XWindows.[/QUOTE]

See pg. 6 here:

On Linux, it’s already supported through existing X window system mechanisms (as I was describing). MSWin’s window system needs help here.