How to create WGL context for specific device

When creating an OpenGL context with WGL on a multi-GPU system, Windows decides which GPU the context is created for. I know that in the Windows graphics settings, an end user can change “Let Windows decide” to a specific GPU to affect context creation. But can I create a context for a different GPU than the default one solely from my application? If Windows exposes this functionality in its UI, there must be some hidden functions in the Windows system DLLs that can achieve this behavior.

On Linux it is straightforward to create an OpenGL context for a specific device using EGL_EXT_platform_device. However, on Windows with WGL, things are not that straightforward. There have been multiple mentions online of people who managed to coax WGL into returning OpenGL contexts for devices other than the default one on multi-GPU systems, the most prominent being the post by user @l_belev on this forum, “How to use OpenGL with a device chosen by you”:

I tried pretty much all of these “hacks” found online on a hybrid Intel + NVIDIA GPU laptop, but CreateDCA always returns a nullptr for me when I try to create a WGL context for anything other than \\.\DISPLAY1 (i.e., for the default GPU). Some of these reports are pretty old, so it is not unthinkable that this simply no longer works on Windows 11 systems.
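For reference, a minimal sketch of the probe I ran, assuming the calling convention from the linked post (the adapter name passed as both the driver and device arguments of CreateDCA; the loop bound of 4 is arbitrary):

```cpp
// Probe CreateDCA for each \\.\DISPLAYn device name. On my Windows 11
// hybrid laptop, only the primary adapter yields a DC this way.
#include <windows.h>
#include <cstdio>

int main() {
    for (int n = 1; n <= 4; ++n) {
        char name[32];
        std::sprintf(name, "\\\\.\\DISPLAY%d", n); // C literal for \\.\DISPLAYn
        HDC dc = CreateDCA(name, name, nullptr, nullptr);
        std::printf("%s -> %s\n", name, dc ? "DC created" : "nullptr");
        if (dc) DeleteDC(dc);
    }
    return 0;
}
```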

So my question here is: Is there anyone on this forum who ever managed to get this behavior for programmatically creating a WGL context for different GPUs in a multi-GPU system to actually work, and if yes, how?

Yes. Been doing it for decades, and do it all the time. … with NVIDIA drivers and GPUs. On Linux and Windows.

On Linux, it’s even easier than that. Just drive separate X desktop screens (monitors) with separate GPUs, and open your GLX/GL window on the correct screen. It just works (at least it did last time I tried it). With NVIDIA GeForce and Quadro GPUs and drivers. No tricks required. No need for EGL.

Yes, exactly. On Windows (sigh…)

With pure WGL/GL, you need to use vendor-specific extensions to target GL rendering to a specific GPU. On NVIDIA drivers, that’s WGL_NV_gpu_affinity. NVIDIA doesn’t expose this extension on GeForce GPUs; only on Quadro GPUs (…or whatever they’re calling their professional line these days. They’ve muddied the waters).

Here is the list of GL driver reports for the NVIDIA Windows-specific GL extension I mentioned:

@Dark_Photon But did you also get something like this to work for GPUs from different vendors in WGL? Like what I mentioned in my post, when the system for example has an iGPU from Intel and a dGPU from NVIDIA? The “How to use OpenGL with a device chosen by you” post I mentioned above seemed to have gotten this to work with “CreateDCA” calls, but I was not able to replicate this due to “CreateDCA” returning a nullptr for anything other than “\\.\DISPLAY1”.

No. But I didn’t spend any time on that either. So I’m unfamiliar with if/how that would be done using pure WGL+Win GDI.

I haven’t been down that rabbit hole for ~5 years. But it’s not much fun. You end up being concerned about issues like Optimus, MUXes, iGPU passthrough, selected GPU, etc. Basically, which GPU renders your image vs. which GPU is actually scanning out the image on the video output port (they’re not necessarily the same). Depending on the laptop and the mode, sometimes you can render directly on the NVIDIA and have the NVIDIA display the output. Or, the NVIDIA is just a dumb offscreen render GPU and the resulting image is forcibly passed through the Intel GPU for display (because it owns the video output). Some laptops don’t have the hardware needed to support the former, and you’re forced to use the latter.

To your point about creating a context on a specific GPU (…on a laptop), laptops typically only have one high-performance GPU, so the Windows default GPU selection mechanism is usually fine. You just have to make sure “Prefer high performance GPU” is selected.

Oh, this one?

I’ll update your post above with a link to this for other readers.

This may fall under the hacks you’ve already seen/tried, but on Optimus systems there’s the “magic symbol” that the driver looks for in application binaries:

extern "C" {
    // DWORD comes from <windows.h>; exporting this symbol asks the NVIDIA
    // driver to select the high-performance GPU for this process.
    __declspec(dllexport) DWORD NvOptimusEnablement = 0x00000001;
}

See OptimusRenderingPolicies.pdf for details.
There is an equivalent for AMD devices:

extern "C" {
    // Same idea for AMD PowerXpress/switchable-graphics systems.
    __declspec(dllexport) DWORD AmdPowerXpressRequestHighPerformance = 0x00000001;
}

See this GPUOpen Page.

@Dark_Photon: Yes, that’s the post I meant.

@carsten_neumann: That sounds like it could be what solves my problem. Unfortunately I don’t have access to the dual GPU laptop until Friday, but I will update this thread then with what my results were.

In theory, I would have still preferred something more general like what is described in “How to use OpenGL with a device chosen by you”. This would enable users to select a device by name in the UI of the application by querying all display adapters via the WinAPI call EnumDisplayDevices. But I guess NvOptimusEnablement and AmdPowerXpressRequestHighPerformance would already cover most realistic dual GPU cases (at least where something like WGL_NV_gpu_affinity wouldn’t be required anyway).
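For reference, a minimal sketch of the enumeration I had in mind (plain Win32; DeviceName is the \\.\DISPLAYn identifier, DeviceString the human-readable adapter name a UI could show):

```cpp
// List all display adapters, flagging the primary one.
#include <windows.h>
#include <cstdio>

int main() {
    DISPLAY_DEVICEA dd = {};
    dd.cb = sizeof(dd); // must be set before calling EnumDisplayDevicesA
    for (DWORD i = 0; EnumDisplayDevicesA(nullptr, i, &dd, 0); ++i) {
        const bool primary =
            (dd.StateFlags & DISPLAY_DEVICE_PRIMARY_DEVICE) != 0;
        std::printf("%s : %s%s\n", dd.DeviceName, dd.DeviceString,
                    primary ? " (primary)" : "");
    }
    return 0;
}
```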

I can confirm now that NvOptimusEnablement worked perfectly on a dual GPU Intel iGPU + NVIDIA dGPU laptop. But if someone ever stumbles over this thread who got the CreateDCA trick to work, then I’d of course still be interested to hear about it.

Good to hear you found a solution.

Ok well just FYI, that’s not how it works in Windows.

\\.\DISPLAY1 is a Display Adapter. From the name, you’d “think” that this is a GPU. But it’s not. Think of it like a Screen in X windows on Linux.

Generally speaking, think of it like this:

  • GPU
    • A physical GPU on your system.
  • Screen
    • Region of the desktop. Driven by a GPU.
    • Windoze calls this a “Display Adapter” for some reason (e.g. '\\.\DISPLAY1', '\\.\DISPLAY2', etc.)
    • Corresponds more closely to a video output on a GPU.
  • Monitor
    • Physical display device receiving a video output from a GPU.
    • Often 1-to-1 with Screen/Display Adapter (e.g. \\.\DISPLAY1\Monitor0)

In general, the mapping between GPUs and Screens (Display Devices) is 1-to-many (e.g. '\\.\DISPLAY1' and '\\.\DISPLAY2' may be driven by the same GPU in a multi-GPU system).

Windows lets you query Display Adapters (EnumDisplayDevices()) and Monitors (EnumDisplayMonitors()). And you can create DCs targeting a specific Display Adapter. But given the 1-to-many mapping, that doesn’t necessarily let you address all GPUs. Typically, you end up with a GL context rendering on the GPU backing the PRIMARY Display Adapter.
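To make that concrete, here’s a rough sketch of the pure WGL + GDI route (DC for a named Display Adapter, then a GL context on it). The function name and the CreateDCA calling convention are illustrative, error handling is trimmed, and, per the above, the resulting context typically lands on the GPU backing the PRIMARY adapter regardless of which name you pass:

```cpp
#include <windows.h>
#include <GL/gl.h>

// adapterName is a string like "\\\\.\\DISPLAY1" (C literal for \\.\DISPLAY1)
HGLRC contextForAdapter(const char* adapterName) {
    HDC dc = CreateDCA(adapterName, adapterName, nullptr, nullptr);
    if (!dc)
        return nullptr;

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_SUPPORT_OPENGL | PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;

    int pf = ChoosePixelFormat(dc, &pfd);
    if (!pf || !SetPixelFormat(dc, pf, &pfd)) {
        DeleteDC(dc);
        return nullptr;
    }
    return wglCreateContext(dc); // GPU choice is still up to the driver
}
```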

AFAIK, Windows does not let you enumerate GPUs, or the mapping of GPUs to Display Adapters. For that, you need something like WGL_NV_gpu_affinity. This lets you query the GPUs and the mapping between GPUs and Screens (Display Adapters), as well as create DCs targeting a “specific” GPU (affinity DCs; see wglCreateAffinityDCNV()). With this, you can create your GL context targeting the specific GPU behind that affinity DC (via the usual create GL context calls: wglCreateContext() / wglCreateContextAttribsARB()) and know for a fact that the GL resources you create and use and rendering commands you submit will be directed at that specific GPU.
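A rough sketch of that route, assuming a dummy GL context is already current so wglGetProcAddress() resolves the entry points (function-pointer types from wglext.h; only exposed on Quadro/professional drivers, so the nullptr checks matter):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h> // HGPUNV, PFNWGLENUMGPUSNVPROC, etc.

// Returns an affinity DC pinned to the GPU at gpuIndex, or nullptr.
HDC createAffinityDC(UINT gpuIndex) {
    auto wglEnumGpusNV = (PFNWGLENUMGPUSNVPROC)
        wglGetProcAddress("wglEnumGpusNV");
    auto wglCreateAffinityDCNV = (PFNWGLCREATEAFFINITYDCNVPROC)
        wglGetProcAddress("wglCreateAffinityDCNV");
    if (!wglEnumGpusNV || !wglCreateAffinityDCNV)
        return nullptr; // extension not exposed (e.g. GeForce driver)

    HGPUNV gpu = nullptr;
    if (!wglEnumGpusNV(gpuIndex, &gpu))
        return nullptr; // no GPU at that index

    HGPUNV gpuList[2] = { gpu, nullptr }; // null-terminated GPU list
    return wglCreateAffinityDCNV(gpuList);
}
```

A context created on this DC via wglCreateContext() renders on that GPU; wglEnumGpuDevicesNV() gives you the mapping from a GPU handle back to its Display Adapter names.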

Also since you’re running on a laptop with Intel and NVIDIA GPUs, it’s worth mentioning that…

In this case, even if the NVIDIA GPU is a Quadro, when you query which GPU is driving the only Screen / Display Adapter on the desktop, you’ll often find that it’s “not” the NVIDIA. This may indicate that the Intel GPU physically owns the video output, so the NVIDIA GPU cannot drive it directly. You’ll also see this kind of result if you’re querying GPUs, Display Adapters, and Monitors when logged into a Windows box over RDP.

Thanks for the clarifications. Yes, I also had noticed when querying them that there are multiple \\.\DISPLAYn entries per GPU. I guess creating a DC for \\.\DISPLAYn and being able to create an OpenGL context for that DC to get a context for the respective GPU was only a hack to begin with. I’m kind of curious when it broke, as it seems like people had it working some years ago. But at the end of the day, I guess the only thing that counts is that it no longer works today.
