Issue with Dual GPUs on System

Hello,

I’ve installed Vulkan SDK version 1.4.280.0 on my Ubuntu 24.04 LTS PC, which features two graphics cards. The SDK installed successfully, but it defaults to using the integrated Intel GPU. This integrated GPU only supports Vulkan version 1.3, preventing me from utilizing the current API version as described in the documentation.

Is there a solution to this problem? Any assistance would be greatly appreciated.


Relevant System Information:

vulkaninfo output (partial):

WARNING: [Loader Message] Code 0 : Layer VK_LAYER_MESA_device_select uses API version 1.3 which is older than the application specified API version of 1.4. May cause issues.

==========
VULKANINFO
==========

Vulkan Instance Version: 1.4.321

GPU0:
VkPhysicalDeviceProperties:
---------------------------
	apiVersion        = 1.3.289 (4206881)
	driverVersion     = 24.2.8 (100671496)
	vendorID          = 0x8086
	deviceID          = 0x3e9b
	deviceType        = PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU
	deviceName        = Intel(R) UHD Graphics 630 (CFL GT2)
	pipelineCacheUUID = 7f87bb17-5c16-2b2b-9454-da49d2061db8

GPU1:
VkPhysicalDeviceProperties:
---------------------------
	apiVersion        = 1.4.303 (4210991)
	driverVersion     = 575.64.3.0 (2412773568)
	vendorID          = 0x10de
	deviceID          = 0x1f95
	deviceType        = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
	deviceName        = NVIDIA GeForce GTX 1650 Ti
	pipelineCacheUUID = 72f9eb38-c855-48f3-b993-5a0c1fff47ac

nvidia-smi output:

Wed Jul 23 14:21:15 2025       
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 575.64.03              Driver Version: 575.64.03      CUDA Version: 12.9     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce GTX 1650 Ti     Off |   00000000:01:00.0 Off |                  N/A |
| N/A   47C    P8              4W /   50W |       5MiB /   4096MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
                                                                                         
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI              PID   Type   Process name                        GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A            1771      G   /usr/lib/xorg/Xorg                        4MiB |
+-----------------------------------------------------------------------------------------+

echo $VULKAN_SDK output:

/home/shutruk/vulkan/1.4.321.1/x86_64

I don’t understand what you mean by this. Vulkan applications have full control over which GPU in the system they use (by selecting the corresponding VkPhysicalDevice). The SDK can only influence the order in which GPUs are enumerated to applications (and possibly filter some GPUs out); I believe the Vulkan Loader, which is part of the SDK, handles the enumeration.

The vulkaninfo output you posted shows that the tool is aware of both GPUs in your system.

Which application chooses the Intel GPU over your GTX 1650 Ti, i.e. is it one that you are developing or a third-party app? In the former case, you should take a look at how you select the VkPhysicalDevice to use and probably add a preference for *_TYPE_DISCRETE_GPU.
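To illustrate the selection logic, here is a minimal sketch using a stand-in enum so it runs without a Vulkan instance; in real code you would fill this from VkPhysicalDeviceProperties::deviceType for each enumerated device:

```cpp
#include <cstddef>
#include <vector>

// Stand-in for VkPhysicalDeviceType so the preference logic can be shown
// without a live Vulkan instance.
enum class DeviceType { Other, IntegratedGpu, DiscreteGpu, VirtualGpu, Cpu };

// Prefer the first discrete GPU; otherwise fall back to device 0.
std::size_t pickPhysicalDevice(const std::vector<DeviceType>& types)
{
    for (std::size_t i = 0; i < types.size(); ++i)
        if (types[i] == DeviceType::DiscreteGpu)
            return i; // found a discrete GPU, use it
    return 0; // no discrete GPU present, take whatever is enumerated first
}
```

On your system this would pick GPU1 (the GTX 1650 Ti), since GPU0 is the integrated Intel one.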
For a third-party app, the SDK docs describe the environment variable VK_DRIVER_FILES, which can be used to limit which drivers the loader is aware of. I don’t know where the .json manifest for the NVIDIA driver is typically installed on a Linux system, but that is where the variable should point so that only that driver is used.
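For reference, the path below is an assumption on my part (it varies by distribution and driver packaging), but the NVIDIA ICD manifest commonly lives under /usr/share/vulkan/icd.d/, so something like this could restrict the loader to the NVIDIA driver:

```shell
# Path is an assumption -- check where your distro installs the NVIDIA ICD manifest.
export VK_DRIVER_FILES=/usr/share/vulkan/icd.d/nvidia_icd.json
vulkaninfo --summary   # with the variable set, only the NVIDIA GPU should be listed
```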

The errors occur during development. The following code produces them:

#include <algorithm>
#include <iostream>
#include <stdexcept>
#include <cstdlib>
#include <memory>

#ifdef __INTELLISENSE__
#include <vulkan/vulkan_raii.hpp>
#else
#include <vulkan/vulkan.hpp>
#endif

#include <vulkan/vk_platform.h>

#define GLFW_INCLUDE_VULKAN // REQUIRED only for GLFW CreateWindowSurface.
#include <GLFW/glfw3.h>

constexpr uint32_t WIDTH = 800;
constexpr uint32_t HEIGHT = 600;

class HelloTriangleApplication
{
public:
    void run()
    {
        initWindow();
        initVulkan();
        mainLoop();
        cleanup();
    }

private:
    GLFWwindow *window = nullptr;

    vk::raii::Context context;
    vk::raii::Instance instance = nullptr;

    void initWindow()
    {
        glfwInit();

        glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
        glfwWindowHint(GLFW_RESIZABLE, GLFW_FALSE);

        window = glfwCreateWindow(WIDTH, HEIGHT, "Vulkan", nullptr, nullptr);
    }

    void initVulkan()
    {
        createInstance();
    }

    void mainLoop()
    {
        while (!glfwWindowShouldClose(window))
        {
            glfwPollEvents();
        }
    }

    void cleanup()
    {
        glfwDestroyWindow(window);

        glfwTerminate();
    }

    void createInstance()
    {
        constexpr vk::ApplicationInfo appInfo{.pApplicationName = "Hello Triangle",
                                              .applicationVersion = VK_MAKE_VERSION(1, 0, 0),
                                              .pEngineName = "No Engine",
                                              .engineVersion = VK_MAKE_VERSION(1, 0, 0),
                                              .apiVersion = vk::ApiVersion14};

        // Get the required instance extensions from GLFW.
        uint32_t glfwExtensionCount = 0;
        auto glfwExtensions = glfwGetRequiredInstanceExtensions(&glfwExtensionCount);

        // Check if the required GLFW extensions are supported by the Vulkan implementation.
        auto extensionProperties = context.enumerateInstanceExtensionProperties();
        for (uint32_t i = 0; i < glfwExtensionCount; ++i)
        {
            if (std::ranges::none_of(extensionProperties,
                                     [glfwExtension = glfwExtensions[i]](auto const &extensionProperty)
                                     { return strcmp(extensionProperty.extensionName, glfwExtension) == 0; }))
            {
                throw std::runtime_error("Required GLFW extension not supported: " + std::string(glfwExtensions[i]));
            }
        }

        vk::InstanceCreateInfo createInfo{
            .pApplicationInfo = &appInfo,
            .enabledExtensionCount = glfwExtensionCount,
            .ppEnabledExtensionNames = glfwExtensions};
        instance = vk::raii::Instance(context, createInfo);
    }
};

int main()
{
    try
    {
        HelloTriangleApplication app;
        app.run();
    }
    catch (const std::exception &e)
    {
        std::cerr << e.what() << std::endl;
        return EXIT_FAILURE;
    }

    return EXIT_SUCCESS;
}

My editor reports the following errors:

Pointer cannot be used with non-aggregate type “vk::ApplicationInfo”. C/C++

Namespace “vk” has no member “ApiVersion14” C/C++(135)

And the following errors when trying to build:

$ g++ main.cpp -o main -lglfw -lvulkan
main.cpp:35:9: error: ‘raii’ in namespace ‘vk’ does not name a type
   35 |     vk::raii::Context context;
      |         ^~~~
main.cpp:36:9: error: ‘raii’ in namespace ‘vk’ does not name a type
   36 |     vk::raii::Instance instance = nullptr;
      |         ^~~~
main.cpp: In member function ‘void HelloTriangleApplication::createInstance()’:
main.cpp:74:65: error: ‘ApiVersion14’ is not a member of ‘vk’; did you mean ‘ApiVersion13’?
   74 |                                               .apiVersion = vk::ApiVersion14};
      |                                                                 ^~~~~~~~~~~~
      |                                                                 ApiVersion13
main.cpp:74:77: error: designated initializers cannot be used with a non-aggregate type ‘vk::ApplicationInfo’
   74 |                                               .apiVersion = vk::ApiVersion14};
      |                                                                             ^
main.cpp:74:77: error: no matching function for call to ‘vk::ApplicationInfo::ApplicationInfo(<brace-enclosed initializer list>)’
main.cpp:74:77: error: designated initializers cannot be used with a non-aggregate type ‘const vk::ApplicationInfo’
main.cpp:81:36: error: ‘context’ was not declared in this scope
   81 |         auto extensionProperties = context.enumerateInstanceExtensionProperties();
      |                                    ^~~~~~~
main.cpp:84:22: error: ‘std::ranges’ has not been declared
   84 |             if (std::ranges::none_of(extensionProperties,
      |                      ^~~~~~
main.cpp:95:54: error: designated initializers cannot be used with a non-aggregate type ‘vk::InstanceCreateInfo’
   95 |             .ppEnabledExtensionNames = glfwExtensions};
      |                                                      ^
main.cpp:95:54: error: no matching function for call to ‘vk::InstanceCreateInfo::InstanceCreateInfo(<brace-enclosed initializer list>)’
main.cpp:96:9: error: ‘instance’ was not declared in this scope; did you mean ‘VkInstance’?
   96 |         instance = vk::raii::Instance(context, createInfo);
      |         ^~~~~~~~
      |         VkInstance
main.cpp:96:24: error: ‘vk::raii’ has not been declared
   96 |         instance = vk::raii::Instance(context, createInfo);

Could you please advise where exactly I need to specify the parameters you mentioned?

Sorry, your original question was about GPU selection, which I thought I had something relevant to say about. Now you have compiler errors when using the Vulkan-Hpp wrappers; I’m not familiar with those, so I don’t know what the problem may be.

That being said…

… this looks highly suspicious. Why do you want to use the RAII wrappers only when __INTELLISENSE__ is defined and not otherwise? AFAIK IntelliSense is the name of the code inspection/analysis technology in Visual Studio, but you are on a Linux system using g++ as your compiler, so that define is probably never set for you. That would explain why you are getting errors about the namespace vk::raii (or the types in it) not making sense to the compiler: it has never seen any declarations in that namespace, because vulkan_raii.hpp is not included when the compiler actually runs (since __INTELLISENSE__ is not defined in that case).
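If it helps, two more things stand out in the build log. These are assumptions on my part, based on the error messages rather than on familiarity with the wrappers: the “designated initializers cannot be used with a non-aggregate type” errors suggest the vk:: structs still have constructors, which current Vulkan-Hpp disables when VULKAN_HPP_NO_CONSTRUCTORS is defined before the include; and std::ranges requires compiling with -std=c++20 or later. A sketch of the changed includes:

```cpp
// Define this before the include so the vk:: structs become aggregates and
// designated initializers compile (assumption: your SDK's headers honor
// VULKAN_HPP_NO_CONSTRUCTORS, as current Vulkan-Hpp does).
#define VULKAN_HPP_NO_CONSTRUCTORS

// Include the RAII header unconditionally, not just under __INTELLISENSE__;
// it pulls in vulkan.hpp itself.
#include <vulkan/vulkan_raii.hpp>
```

Then build with `g++ -std=c++20 main.cpp -o main -lglfw -lvulkan`. The “ApiVersion14 is not a member” error may also mean g++ is picking up older system Vulkan headers from /usr/include rather than the SDK’s; if so, adding `-I$VULKAN_SDK/include` to the compile command might be worth a try.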

I use multiple devices in my stack.

Here’s how I do it: long story short, when the bootloader runs in my game engine, I call some black magic that generates a weighted score of each device’s performance and specs (see below). It then uses the best device as the first preference and offloads extra work to subsequent devices in parallel. WinThreads headers work nicely for swapping memory in this process.

voodoo code

bool VulkanDevice::is_device_suitable(VkPhysicalDevice device) {
    return rate_device_suitability(device) > 0;
}

int VulkanDevice::rate_device_suitability(VkPhysicalDevice device) {
    VkPhysicalDeviceProperties device_properties;
    VkPhysicalDeviceFeatures device_features;

    vkGetPhysicalDeviceProperties_(device, &device_properties);
    vkGetPhysicalDeviceFeatures_(device, &device_features);

    int score = 0;

    // Check for essential queue families first - reject if missing
    uint32_t queue_family_count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties_(device, &queue_family_count, nullptr);

    std::vector<VkQueueFamilyProperties> queue_families(queue_family_count);
    vkGetPhysicalDeviceQueueFamilyProperties_(device, &queue_family_count, queue_families.data());

    // Must have graphics queue family
    bool has_graphics_queue = false;
    for (const auto& queue_family : queue_families) {
        if (queue_family.queueFlags & VK_QUEUE_GRAPHICS_BIT) {
            has_graphics_queue = true;
            break;
        }
    }

    if (!has_graphics_queue) {
        return 0; // Device is not suitable
    }

    // Base score for being suitable
    score = 100;

    // Heavily prefer discrete GPUs (like RTX 3070 Ti) over integrated (like Intel UHD)
    if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU) {
        score += 10000; // Major boost for dedicated GPUs
    } else if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU) {
        score += 1000;  // Lower score for integrated GPUs
    } else if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU) {
        score += 500;   // Even lower for virtual GPUs
    }

    // Boost score based on memory size (discrete GPUs typically have more VRAM)
    VkPhysicalDeviceMemoryProperties memory_properties;
    vkGetPhysicalDeviceMemoryProperties_(device, &memory_properties);

    VkDeviceSize total_memory = 0;
    for (uint32_t i = 0; i < memory_properties.memoryHeapCount; i++) {
        if (memory_properties.memoryHeaps[i].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT) {
            total_memory += memory_properties.memoryHeaps[i].size;
        }
    }

    // Add points based on GB of VRAM (RTX 3070 Ti has 8GB, Intel UHD shares system RAM)
    score += static_cast<int>(total_memory / (1024 * 1024 * 1024)); // +1 point per GB

    // Boost score for common gaming GPU vendors
    std::string device_name = device_properties.deviceName;
    if (device_name.find("RTX") != std::string::npos ||
        device_name.find("GTX") != std::string::npos ||
        device_name.find("GeForce") != std::string::npos) {
        score += 5000; // NVIDIA gaming GPUs get major boost
    } else if (device_name.find("Radeon") != std::string::npos ||
               device_name.find("RX") != std::string::npos) {
        score += 4000; // AMD gaming GPUs get boost
    } else if (device_name.find("Intel") != std::string::npos &&
               device_name.find("UHD") != std::string::npos) {
        score += 500;  // Intel integrated gets minimal boost
    }

    // Boost score for compute/graphics capabilities
    if (device_features.geometryShader) score += 100;
    if (device_features.tessellationShader) score += 100;
    if (device_features.multiViewport) score += 50;
    if (device_features.fillModeNonSolid) score += 50;

    return score;
}

VkPhysicalDevice VulkanDevice::select_gpu_automatically(const std::vector<VkPhysicalDevice>& devices) {
    VkPhysicalDevice best_device = VK_NULL_HANDLE;
    int best_score = -1;

    LOG("[VulkanDevice] Automatic GPU selection (ranking all devices):");

    for (const auto& device : devices) {
        int score = rate_device_suitability(device);

        VkPhysicalDeviceProperties device_properties;
        vkGetPhysicalDeviceProperties_(device, &device_properties);

        const char* gpu_type = "Other";
        if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU) {
            gpu_type = "Discrete GPU";
        } else if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU) {
            gpu_type = "Integrated GPU";
        } else if (device_properties.deviceType == VK_PHYSICAL_DEVICE_TYPE_VIRTUAL_GPU) {
            gpu_type = "Virtual GPU";
        }

        (void)gpu_type;

        LOG("[VulkanDevice] Device: ", device_properties.deviceName, " (Score: ", score, ", ", gpu_type, ")");

        if (score > best_score) {
            best_score = score;
            best_device = device;
        }
    }

    if (best_device == VK_NULL_HANDLE) {
        ERROR("[VulkanDevice] ERROR: No suitable GPU found for automatic selection!", ErrorType::Graphics);
        // TODO: Show GUI error dialog in future
    }

    return best_device;
}

VkPhysicalDevice VulkanDevice::select_gpu_manually(const std::vector<VkPhysicalDevice>& devices, const std::string& device_index) {
    LOG("[VulkanDevice] Manual GPU selection by index: '", device_index, "'");

    // Parse device index
    size_t index;
    try {
        index = std::stoul(device_index);
    } catch (const std::exception&) {
        ERROR("[VulkanDevice] ERROR: Invalid device index. Must be a number (0, 1, 2, etc.)", ErrorType::Graphics);
        // TODO: Show GUI error dialog in future
        return VK_NULL_HANDLE;
    }

    // Check bounds
    if (index >= devices.size()) {
        ERROR("[VulkanDevice] ERROR: Device index out of bounds", ErrorType::Graphics);
        // TODO: Show GUI error dialog in future
        return VK_NULL_HANDLE;
    }

    VkPhysicalDevice device = devices[index];

    // Verify the selected device is suitable
    int score = rate_device_suitability(device);
    if (score <= 0) {
        VkPhysicalDeviceProperties device_properties;
        vkGetPhysicalDeviceProperties_(device, &device_properties);

        ERROR("[VulkanDevice] ERROR: Selected device is not suitable for rendering!", ErrorType::Graphics);
        // TODO: Show GUI error dialog in future
        return VK_NULL_HANDLE;
    }

    VkPhysicalDeviceProperties device_properties;
    vkGetPhysicalDeviceProperties_(device, &device_properties);
    LOG("[VulkanDevice] Selected device ", index, ": ", device_properties.deviceName, " (Score: ", score, ")");

    return device;
}

good luck!

Does this source code allow all uses? Must I give attribution to you?

Is it possible to replace the hardcoded discrete-versus-integrated-versus-virtual test (which accounts for >99% of the score) with tests for supported Vulkan versions, plus include clock_hertz * cores in the score? Reason: some new integrated GPUs (such as the Apple M2 Ultra, which has 76 GPU cores for a total of 27 teraFLOPS) have more resources (RAM, clock, and cores) than some old discrete GPUs (such as the NVIDIA GeForce GTX 1630 or AMD Radeon RX 6300 from 2022, which both deliver less than 1000 gigaFLOPS). Computers with huge integrated-GPU compute may keep an old discrete GPU around only to attach more visual outputs; on such systems it is best if Vulkan executes on the integrated GPU.
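A sketch of what I mean (all names and weights here are illustrative, not from the engine above; Vulkan itself does not report clock speed or core count, so those values would have to come from vendor-specific extensions or an external source):

```cpp
#include <cstdint>

// Illustrative device description; apiMajor/apiMinor would come from
// VkPhysicalDeviceProperties::apiVersion, while clockGHz and cores would
// need a vendor-specific or external source.
struct DeviceInfo {
    std::uint32_t apiMajor;
    std::uint32_t apiMinor;
    double clockGHz;
    std::uint32_t cores;
};

// Score by supported Vulkan version plus raw throughput (clock * cores),
// instead of hardcoding discrete > integrated > virtual.
long long rateDevice(const DeviceInfo& d)
{
    long long score = 0;
    score += static_cast<long long>(d.apiMajor * 100 + d.apiMinor) * 10; // newer API wins ties
    score += static_cast<long long>(d.clockGHz * d.cores * 100.0);       // throughput proxy
    return score;
}
```

With weights like these, a many-core integrated GPU outscores a weak discrete one, which is the behavior I am after.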

Very good points, and yes, you can replace that with a better calculation as you pointed out. Good improvements; I’ve added issues on my end to fix in the future. Ty.

Feel free to use that code as you will. Attribution is appreciated but not required. :slight_smile:

Feel free to send back improvements to that algorithm, especially for the new M4 Ultra mega-core chips; drop them in here.

Happy it pointed you in the right direction.

_p3n