Creating a context on a specified GPU

Hi,
I’d like to create a context on a specified GPU, but I don’t know how to proceed. Can someone enlighten me?

http://www.equalizergraphics.com/documentation/parallelOpenGLFAQ.html#multigpu

OK for Linux (XOpenDisplay( “:0.x” ), where x stands for the GPU, right?).
How about Windows?
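
For the Linux side, here is a minimal sketch of what I have in mind (my own assumption: the X server exposes one screen per GPU, i.e. no Xinerama/TwinView, so screen x maps to GPU x; the GPU index and the attribute list are just examples):

#include <stdio.h>
#include <X11/Xlib.h>
#include <GL/glx.h>

/* Open a connection to screen <gpu> of display :0, i.e. ":0.<gpu>".
 * With one X screen per GPU, the context ends up on that GPU. */
int main(void)
{
    const int gpu = 1;                     /* index of the GPU/screen to use */
    char name[16];
    snprintf(name, sizeof(name), ":0.%d", gpu);

    Display *dpy = XOpenDisplay(name);
    if (!dpy) {
        fprintf(stderr, "cannot open display %s\n", name);
        return 1;
    }

    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
    if (!vi) {
        fprintf(stderr, "no suitable visual on screen %d\n", gpu);
        return 1;
    }

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);

    /* ... create a window on that screen, glXMakeCurrent(), render ... */

    glXDestroyContext(dpy, ctx);
    XCloseDisplay(dpy);
    return 0;
}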

NVidia (Quadro only): WGL_NV_gpu_affinity
ATI: WGL_AMD_gpu_association

@skynet:
I won’t use extensions, as I want to create my context on the specified GPU before making any OpenGL calls.

I found this for Windows:

I haven’t taken a look at it yet.

You need to use these extensions if you want to create a context on a specific GPU. I know it’s weird, but here is what has to be done:

  • Create a non-affinity context on the GPU you want to use
  • Query the extension function pointers
  • Destroy the non-affinity context
  • Create the affinity context using the extension.

We use this approach successfully in Equalizer.
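
In rough outline, the four steps look like this (only a sketch, not actual Equalizer code: it assumes a Quadro board and the wglext.h header from the OpenGL registry, simplifies the dummy window setup and error handling, and the helper name createAffinityContext is just for illustration):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>   /* PFNWGLENUMGPUSNVPROC, PFNWGLCREATEAFFINITYDCNVPROC, ... */

/* Returns a context restricted to the GPU with the given index,
 * or NULL if WGL_NV_gpu_affinity is not available. */
HGLRC createAffinityContext(unsigned gpuIndex)
{
    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd), 1,
        PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER,
        PFD_TYPE_RGBA, 32 };

    /* Step 1: throw-away window, DC and non-affinity context. */
    HWND tmpWnd = CreateWindowA("STATIC", "tmp", WS_POPUP, 0, 0, 1, 1,
                                NULL, NULL, GetModuleHandle(NULL), NULL);
    HDC  tmpDC  = GetDC(tmpWnd);
    SetPixelFormat(tmpDC, ChoosePixelFormat(tmpDC, &pfd), &pfd);
    HGLRC tmpCtx = wglCreateContext(tmpDC);
    wglMakeCurrent(tmpDC, tmpCtx);

    /* Step 2: query the affinity entry points. */
    PFNWGLENUMGPUSNVPROC wglEnumGpusNV =
        (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
    PFNWGLCREATEAFFINITYDCNVPROC wglCreateAffinityDCNV =
        (PFNWGLCREATEAFFINITYDCNVPROC)wglGetProcAddress("wglCreateAffinityDCNV");

    /* Step 3: destroy the non-affinity context again. */
    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(tmpCtx);
    ReleaseDC(tmpWnd, tmpDC);
    DestroyWindow(tmpWnd);

    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return NULL;                     /* no Quadro / extension missing */

    /* Step 4: enumerate the GPUs and create the affinity DC + context. */
    HGPUNV gpu;
    if (!wglEnumGpusNV(gpuIndex, &gpu))
        return NULL;                     /* no GPU with that index */

    HGPUNV gpuList[2] = { gpu, NULL };   /* NULL-terminated list */
    HDC    affinityDC = wglCreateAffinityDCNV(gpuList);
    SetPixelFormat(affinityDC, ChoosePixelFormat(affinityDC, &pfd), &pfd);

    return wglCreateContext(affinityDC); /* runs only on the selected GPU */
}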

This only works on Windows, and moreover only with NVIDIA Quadro cards.
Isn’t it sufficient to use just the non-affinity context, especially if I don’t want to do multi-pipe rendering?

Anyway, if someone wants a code snippet for the Windows part…

By default, the GPU on Windows is assigned based on the monitor a context is created on. Setting the position and size of your window to be entirely on the desired monitor should do the trick.

By default, the GPU on Windows is assigned based on the monitor a context is created on. Setting the position and size of your window to be entirely on the desired monitor should do the trick.

That is not true. Since the driver has to expect that you move the window from one monitor to another at any time (your window might even straddle two GPUs!), it has to be prepared to render on any available GPU. That means it needs to:

  1. Create copies of all resources (VBOs, textures, FBOs etc) on all GPUs
  2. Copy changed data around (RTT comes to mind here… can become slow as hell this way)

This is certainly not the behaviour you want. When you ask the system to do rendering on one particular GPU, you expect the other GPU(s) to be completely free for other work. On Windows, NV_gpu_affinity and AMD_gpu_association are the only way to talk to a specific GPU.
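
For completeness, the AMD path looks roughly like this (only a sketch; it assumes a temporary context is already current so that wglGetProcAddress() returns valid pointers, that wglext.h provides the WGL_AMD_gpu_association typedefs, and the helper name createAssociatedContext is just for illustration):

#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

/* Create a context tied to one specific GPU via WGL_AMD_gpu_association.
 * A temporary context must already be current when this is called. */
HGLRC createAssociatedContext(unsigned gpuIndex)
{
    PFNWGLGETGPUIDSAMDPROC wglGetGPUIDsAMD =
        (PFNWGLGETGPUIDSAMDPROC)wglGetProcAddress("wglGetGPUIDsAMD");
    PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC wglCreateAssociatedContextAMD =
        (PFNWGLCREATEASSOCIATEDCONTEXTAMDPROC)
            wglGetProcAddress("wglCreateAssociatedContextAMD");

    if (!wglGetGPUIDsAMD || !wglCreateAssociatedContextAMD)
        return NULL;                                /* extension not supported */

    UINT count = wglGetGPUIDsAMD(0, NULL);          /* first call: number of GPUs */
    if (gpuIndex >= count || count > 16)
        return NULL;

    UINT ids[16];
    wglGetGPUIDsAMD(count, ids);                    /* second call: the actual ids */

    /* The returned context is made current with
     * wglMakeAssociatedContextCurrentAMD(), not wglMakeCurrent(). */
    return wglCreateAssociatedContextAMD(ids[gpuIndex]);
}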

Oh, I see. Thanks for clearing that up.

@skynet> […]NV_gpu_affinity and AMD_gpu_association […]
That’s especially true for multi-pipe rendering.
But what about rendering on one GPU and computing on another (OK, this sounds odd, since OpenCL and CUDA are dedicated to GPU computing)? In that case gpu_affinity/gpu_association aren’t necessary, are they?

CUDA has its own means to enumerate all CUDA-capable devices in the system. Also, CUDA is prepared to cooperate with NV_gpu_affinity (quote from the CUDA Programming Guide):

On Windows and for Quadro GPUs, cudaWGLGetDevice() can be used to retrieve the CUDA device associated to the handle returned by WGL_NV_gpu_affinity().

So, I imagine it is very easy to render on one GPU and do the computations on another.
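
Something along these lines should work, I suppose (a sketch only; it assumes the WGL_NV_gpu_affinity entry points have already been queried as described above, omits error checking, and the helper name pickComputeDevice is just for illustration):

#include <windows.h>
#include <GL/wglext.h>          /* HGPUNV, PFNWGLENUMGPUSNVPROC */
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>    /* cudaWGLGetDevice() */

/* Find out which CUDA device corresponds to the rendering GPU and
 * direct the compute work to a different device. */
void pickComputeDevice(PFNWGLENUMGPUSNVPROC wglEnumGpusNV, unsigned renderGpuIndex)
{
    HGPUNV renderGpu;
    wglEnumGpusNV(renderGpuIndex, &renderGpu);

    int renderDevice = -1;
    cudaWGLGetDevice(&renderDevice, renderGpu);   /* CUDA id of the rendering GPU */

    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);

    /* Pick any CUDA-capable device that is not the one used for rendering. */
    for (int d = 0; d < deviceCount; ++d) {
        if (d != renderDevice) {
            cudaSetDevice(d);                     /* kernels now run on this GPU */
            return;
        }
    }
}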