OpenGL with threads


I am working on an application that has multiple graphics window instances. The main application (the interface) runs in the main thread. When the user selects a new window, a graphics window is loaded; I should mention that I am using wxGLCanvas as the graphics window. There is also a preview window, likewise derived from wxGLCanvas, in which I do image processing before loading the image into the main graphics window.

Now, the preview window is run from the main thread (not the one that runs the graphics window). Both classes, the preview window and the graphics window, have an UnProject function. In the main GUI I display the mouse pointer location in the status bar at the lower corner, and to do this I call the UnProject function of the graphics window. But when I run the application, the UnProject function of the preview window is invoked every time, even without any instance of the preview window being created, and even though it is a protected member of that class. Initially both functions had the same name in both classes; I tried renaming the function in the preview window class, but that did not solve the problem.

Although the application runs without any errors, the x,y coordinates shown in the status bar are wrong. Could someone kindly suggest whether there is a problem with my approach of using OpenGL with multiple threads?


OpenGL is inherently single-threaded (you only have one graphics card), so it is easiest to have a single thread handle all of your OpenGL work. This thread does not handle keyboard or mouse events directly; it handles all of your rendering, so if you have multiple windows, you render them serially in this thread, selecting each context in turn. If you have shared resources, create a shared context that is made current whenever a mesh or texture is created. The only exception to the single thread is loading resources into the shared context, which can be done on a separate thread.
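A sketch of that pattern, using only standard C++11 threading (no real GL calls; the class and method names are my own, and the comments mark where context selection and drawing would actually go):

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// One render thread owns all OpenGL work; other threads only enqueue jobs.
// In a real app, each job would make that window's context current
// (e.g. canvas->SetCurrent(*context)), draw, and swap buffers.
class RenderThread {
public:
    RenderThread() : worker_(&RenderThread::run, this) {}

    ~RenderThread() {
        {
            std::lock_guard<std::mutex> lock(m_);
            done_ = true;
        }
        cv_.notify_one();
        worker_.join();          // drains remaining jobs, then exits
    }

    // Called from the GUI thread: schedule a frame for one window.
    void enqueue(std::function<void()> job) {
        {
            std::lock_guard<std::mutex> lock(m_);
            jobs_.push(std::move(job));
        }
        cv_.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !jobs_.empty(); });
                if (jobs_.empty()) return;   // done_ set and queue drained
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // select this window's context, draw, swap -- serially
        }
    }

    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> jobs_;
    bool done_ = false;
    std::thread worker_;
};
```

The GUI thread keeps handling mouse/keyboard events and only enqueues work; the render thread services every window one after another, so no context is ever current in two threads.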

Your UnProject problem is probably caused by having the wrong context current when the function is called.
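To see why the wrong current context produces plausible-looking but wrong numbers: unprojection uses whatever projection and viewport state happens to be current when it runs. A minimal sketch of the core of what gluUnProject does for a 2D orthographic projection (plain math, no real GL; the glOrtho-style parameter names are for illustration only):

```cpp
#include <cassert>

// Map a window coordinate back to a world coordinate using the
// "current" orthographic projection and viewport -- the essence of
// gluUnProject for the 2D case.
struct Ortho    { double left, right, bottom, top; };
struct Viewport { int x, y, w, h; };

double unprojectX(double winX, const Ortho& o, const Viewport& v) {
    return o.left + (winX - v.x) / v.w * (o.right - o.left);
}

double unprojectY(double winY, const Ortho& o, const Viewport& v) {
    return o.bottom + (winY - v.y) / v.h * (o.top - o.bottom);
}
```

The same mouse x of 100 unprojects to 50.0 under one window's state (ortho 0–100 over a 200-pixel viewport) but 10.0 under another's (ortho 0–10 over 100 pixels): no error, no crash, just wrong status-bar coordinates, which matches the symptom described.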

The main thing to bear in mind is that the current OpenGL context is per-thread state, i.e. each thread has its own current context.
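A toy model of that rule (no real GL here; the thread_local pointer stands in for the per-thread "current context" slot inside the driver, which wglMakeCurrent/glXMakeCurrent, or wxGLCanvas::SetCurrent, would set):

```cpp
#include <cassert>
#include <thread>

// Stand-in for an OpenGL context handle.
struct GLContextModel { int id; };

// Per-thread current-context slot: every thread starts with no context,
// and making a context current in one thread changes nothing elsewhere.
thread_local GLContextModel* g_current = nullptr;

void makeCurrentModel(GLContextModel* ctx) { g_current = ctx; }
```

So if thread A has the graphics window's context current and thread B calls MakeCurrent with the preview context, thread A's current context is unaffected; a thread that never calls MakeCurrent has no current context at all, and GL calls it makes go nowhere (or to the wrong place).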

It is an error to attempt to make a single context current in more than one thread at the same time, so if you want to use multiple threads, you may need to create additional contexts. You definitely need to refer to the documentation for your GUI toolkit (wxWidgets) to see whether it imposes any additional restrictions on the use of threads.

Finally: GPUs typically aren’t designed to support fast context switching, so making OpenGL calls from more than one thread can hurt performance, sometimes quite badly.

Since I don’t have any experience with multi-GPU setups or CrossFire/SLI: what is the preferred way to dispatch commands in such a case? I imagine it is still one context, and the implementation handles load balancing automatically? Two contexts? Linux/Mac/Windows? Thanks!

Depends on what you’re trying to do with a multi-GPU setup. For CrossFire/SLI (GPUs connected behind-the-scenes), IIRC you address these as if they’re one GPU, so one context. If you want fast, independent use of each GPU, create one GL context and screen per GPU, one thread or process per context, and go to town.

IIRC you address these as if they’re one GPU, so one context.

That’s what I was getting at. Thank you!

go to town

Hell yeah! :wink:

independent use of each GPU, create one GL context

This is not that straightforward on MS Windows. If you have a Quadro in the mix of NVIDIA cards, there is an extension (WGL_NV_gpu_affinity) to select the GPU for a context. I understand that with AMD the screen location determines the GPU, but I don’t have two AMD cards, so I cannot confirm this. With two GeForce cards I haven’t worked out how to choose the GPU; I seem to get the first one found by the operating system.


…independent use of each GPU, create one GL context

This is not that straight forward in the MS Windows. …With 2 geForce cards I haven’t worked out how I can choose the gpu…
In Linux, no problem. Not sure if/how to get the same with Windows.

Dark Photon - How do you select a GPU on Linux?

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.