NVidia GPU affinity example?

I’m trying to find an example that detects an Intel integrated graphics chip and chooses the NVidia card, in laptops that have both. I can only find a lot of people talking about the extension, but no actual examples of it in use.

I am trying this code to print the GPU names, but I get an “unhandled exception” error on the call to wglEnumGpusNV(). This is on a GeForce 480 with the latest driver:

        // NVidia GPU affinity: enumerate GPUs and the display devices attached to each
        #define MAX_GPU 4
        HGPUNV hGPU[MAX_GPU];
        GPU_DEVICE gpuDevice;
        UINT gpuIndex = 0;

        while ((gpuIndex < MAX_GPU) && wglEnumGpusNV(gpuIndex, &hGPU[gpuIndex]))
        {
            UINT deviceIndex = 0;
            gpuDevice.cb = sizeof(gpuDevice); // cb must be set before enumerating
            while (wglEnumGpuDevicesNV(hGPU[gpuIndex], deviceIndex, &gpuDevice))
            {
                printf("%s\n", gpuDevice.DeviceName); // don't pass DeviceName as the format string
                deviceIndex++; // the original loop never advanced this index, so it spun forever
            }
            gpuIndex++;
        }

Extension:
http://developer.download.nvidia.com/opengl/specs/WGL_nv_gpu_affinity.txt

Sorry if this is obvious, but neither your code snippet nor your post mentions it, so: have you checked whether the extension is available on your context (using wglGetExtensionsStringARB), and have you obtained the extension function pointers (using wglGetProcAddress)? Calling through a null function pointer would produce exactly that kind of exception.
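Roughly like this — a sketch, assuming Windows and an already-current OpenGL context; the typedef names follow wglext.h, and `hasExtension` is a small helper I made up for the whole-word search (a plain strstr can match a prefix of a longer extension name):

```cpp
#include <cstring>

// Hypothetical helper: is `name` present as a whole word in the
// space-separated extension list?
bool hasExtension(const char *extList, const char *name)
{
    if (!extList || !name) return false;
    std::size_t len = std::strlen(name);
    if (len == 0) return false;
    for (const char *p = extList; (p = std::strstr(p, name)) != nullptr; p += len)
    {
        bool startOk = (p == extList) || (p[-1] == ' ');
        bool endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk) return true;
    }
    return false;
}

#ifdef _WIN32
#include <windows.h>

// Entry-point typedefs as declared in wglext.h for WGL_NV_gpu_affinity.
DECLARE_HANDLE(HGPUNV);
typedef BOOL (WINAPI *PFNWGLENUMGPUSNVPROC)(UINT iGpuIndex, HGPUNV *phGpu);
typedef const char *(WINAPI *PFNWGLGETEXTENSIONSSTRINGARBPROC)(HDC hdc);

// Returns the wglEnumGpusNV pointer, or null if the extension is unavailable.
PFNWGLENUMGPUSNVPROC loadEnumGpusNV(HDC hdc)
{
    PFNWGLGETEXTENSIONSSTRINGARBPROC getExt =
        (PFNWGLGETEXTENSIONSSTRINGARBPROC)wglGetProcAddress("wglGetExtensionsStringARB");
    if (!getExt || !hasExtension(getExt(hdc), "WGL_NV_gpu_affinity"))
        return nullptr; // extension not advertised on this context
    return (PFNWGLENUMGPUSNVPROC)wglGetProcAddress("wglEnumGpusNV");
}
#endif
```

If loadEnumGpusNV returns null, calling wglEnumGpusNV through an uninitialized pointer is exactly the crash you're seeing.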

GPU Affinity is only supported on Quadro cards.

This should be the shortest standalone example: https://github.com/Eyescale/Equalizer/blob/master/tools/affinityCheck/affinityCheck.cpp

Don’t know if you have already seen this article on Geeks3D:
http://www.geeks3d.com/20081014/parallel-rendering-in-opengl-how-to-use-multiple-gpus/

For forcing NVIDIA Optimus laptops to always use the NVIDIA card for your program, see the following PDF: http://developer.download.nvidia.com/devzone/devcenter/gamegraphics/files/OptimusRenderingPolicies.pdf