Greetings, I was hoping you could give me some advice on how best to design this:
I have a Vulkan application that creates the VkPhysicalDevice early in the program. Later when a window is created it creates a VkSurface, then a VkDevice with a queue family that has graphics and can present to this surface (similar to VulkanTutorial). Later the VkDevice serves as input to vkCreateShaderModule.
Now the plan is to create all of the shader modules at once early in the program, after the VkPhysicalDevice has been created but before a window is opened. That means I would need a VkDevice at a point where I don’t yet have all the queue info, so that I can call vkCreateShaderModule.
What do you think is the best way to handle this? Create a VkDevice on the fly with only a graphics-capable queue, compile the shaders, then destroy the VkDevice? Would the shader modules survive the destruction of that VkDevice? Or two VkDevices, one for shader compilation and one for presentation?
But OpenGL can compile shaders without a window, right? So there should be a corresponding solution with Vulkan, right?
What if I create one VkDevice early in the program with only a graphics-capable queue family, and then, once the VkSurface is created, another one with queues that support both graphics and presentation? Would the second VkDevice be able to use shader modules from the first VkDevice?
The alternative would of course be to delay shader-module creation until the window / VkSurface / VkDevice exist, and only do the rest at the beginning.
Compiling and linking shaders in GL, using GL itself, requires a created and bound (current) OpenGL context. To create and bind one, you typically need a drawable (window, pbuffer, or pixmap).
However, specific platforms and drivers may offer the ability to create and bind a context without a drawable. In the GL spec and extensions (e.g. WGL, GLX), this is referred to as “no default framebuffer” or “without a default framebuffer”. In EGL-land, for instance, there’s EGL_KHR_surfaceless_context. NVIDIA’s Linux drivers, for example, make use of this (see EGL Eye: OpenGL Visualization without an X Server), allowing you to create and bind a context without the graphical display manager (X server) even running, much less providing windows to applications.
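As a rough illustration of the surfaceless route: with EGL_KHR_surfaceless_context you bind the context with EGL_NO_SURFACE on both the draw and read slots. This is only a sketch under the assumption that the driver exposes the extension; error handling is minimal.

```c
// Sketch: binding a GL context with no drawable at all, via
// EGL_KHR_surfaceless_context (availability is driver-dependent).
#include <EGL/egl.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    EGLDisplay dpy = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, NULL, NULL))
        return 1;

    // Check for the extension before relying on it.
    const char *exts = eglQueryString(dpy, EGL_EXTENSIONS);
    if (!exts || !strstr(exts, "EGL_KHR_surfaceless_context")) {
        fprintf(stderr, "surfaceless contexts not supported\n");
        return 1;
    }

    eglBindAPI(EGL_OPENGL_API);
    const EGLint cfgAttribs[] = { EGL_RENDERABLE_TYPE, EGL_OPENGL_BIT, EGL_NONE };
    EGLConfig cfg; EGLint n = 0;
    eglChooseConfig(dpy, cfgAttribs, &cfg, 1, &n);

    EGLContext ctx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, NULL);
    // The key step: no EGLSurface anywhere -- EGL_NO_SURFACE for draw and read.
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, ctx);

    // ...glCompileShader / glLinkProgram here, no window required...
    return 0;
}
```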
Let’s hope. If not now, then eventually.
One thing to keep in mind, however, is that comparing GL shader compilation with Vulkan shader compilation is a bit apples-and-oranges.

GL’s shader compile+link likely just takes the high-level language (GLSL) down to an IR (i.e. cross-GPU portable assembly, either vendor assembly or SPIR-V), assuming you didn’t just provide a shader binary or SPIR-V directly. Vulkan, I believe, pairs the shader IR with the “draw state” + GPU so that it can take things beyond that IR stage (SPIR-V) to the ISA that actually runs on a specific GPU. Think of this as a second-level compile/assembly process that has to happen before you’ve got a GPU-runnable shader image; it lets everything be pre-baked into the shader so it’s in a ready-to-render state.

GL drivers, on the other hand, typically just wait until draw time to do this IR->ISA generation, to the detriment of your application’s real-time rendering performance: the more shaders needing ISA generation, the bigger the frame-time spike.
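To make the two-stage split above concrete: in Vulkan, vkCreateShaderModule only wraps the portable SPIR-V, while the expensive state-paired compile happens at pipeline creation. A minimal sketch, assuming `device` is a valid VkDevice and `spirv`/`spirvSize` hold a SPIR-V binary (placeholder names, not from the thread):

```c
// Sketch of the two Vulkan "compile" stages discussed above.
#include <vulkan/vulkan.h>
#include <stddef.h>

VkShaderModule makeModule(VkDevice device, const uint32_t *spirv, size_t spirvSize) {
    // Stage 1: vkCreateShaderModule merely wraps the portable IR
    // (SPIR-V). No GPU-specific code generation is needed here, which
    // is why it is comparatively cheap.
    VkShaderModuleCreateInfo ci = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = spirvSize,   // size in bytes
        .pCode    = spirv,
    };
    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &ci, NULL, &module);
    return module;
}

// Stage 2 (not shown in full): the IR->ISA compile happens when this
// module is paired with pipeline state in vkCreateGraphicsPipelines,
// producing the pre-baked, ready-to-render shader image.
```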
Now that Vulkan has so much dynamic state, and particularly with the recent introduction of VK_EXT_shader_object, it’ll be interesting to see where this all goes as far as Vulkan shader compilation is concerned.
At some point, there were a number of vendor-specific extensions that allowed you to create windowless OpenGL contexts. And I think EGL context creation lets you do it. But I’m not aware of this being generally available functionality for desktop GL.
While a Vulkan device is like an OpenGL context, there are some fundamental differences as well. Of particular importance is the way that they treat extensions.
In OpenGL, extensions are something the driver provides to you. They exist and have effects whether you like it or not. In Vulkan, this is not the case. A physical device offers some extensions, and you must explicitly choose which ones to use at device creation time. Vulkan extensions (and their effects) are therefore more intimately tied to a specific device.
This means that different devices created from the same physical device are not necessarily comparable the way they are in OpenGL. So objects created on one device that use an extension would be inherently incompatible with a different device that doesn’t activate that extension.
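The per-device opt-in described above happens in VkDeviceCreateInfo. A sketch, assuming `physical` and `graphicsFamily` have already been chosen (placeholder names):

```c
// Sketch: Vulkan extensions are enabled per device, at creation time.
#include <vulkan/vulkan.h>

VkDevice createDevice(VkPhysicalDevice physical, uint32_t graphicsFamily) {
    float priority = 1.0f;
    VkDeviceQueueCreateInfo queueInfo = {
        .sType            = VK_STRUCTURE_TYPE_DEVICE_QUEUE_CREATE_INFO,
        .queueFamilyIndex = graphicsFamily,
        .queueCount       = 1,
        .pQueuePriorities = &priority,
    };
    // Only what is listed here is active on the resulting device. A
    // second device created without VK_KHR_swapchain would not be
    // interchangeable with this one for swapchain-related usage.
    const char *exts[] = { VK_KHR_SWAPCHAIN_EXTENSION_NAME };
    VkDeviceCreateInfo ci = {
        .sType                   = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO,
        .queueCreateInfoCount    = 1,
        .pQueueCreateInfos       = &queueInfo,
        .enabledExtensionCount   = 1,
        .ppEnabledExtensionNames = exts,
    };
    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physical, &ci, NULL, &device);
    return device;
}
```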
That having been said, there ain’t no rule that says you can’t create a swapchain and surface after you create your device. Vulkan devices are inherently headless; giving them a window is an after-the-fact modification.
Windows are used to create surfaces from an instance, and a surface is used to create a device’s swapchain. But nothing says that you have to do this immediately after device creation.
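The ordering this answer suggests can be sketched as follows. The helpers `createDevice`, `compileAllShaders`, and `createWindowSurface` are hypothetical stand-ins for the poster’s own code; the Vulkan entry points are real. Enabling VK_KHR_swapchain on the early device is assumed so the same device can present later.

```c
// Sketch: one device, created headlessly; surface and swapchain later.
#include <vulkan/vulkan.h>

// Hypothetical helpers standing in for the application's own code.
extern VkDevice     createDevice(VkPhysicalDevice physical, uint32_t graphicsFamily);
extern void         compileAllShaders(VkDevice device);   // vkCreateShaderModule calls
extern VkSurfaceKHR createWindowSurface(VkInstance inst); // platform-specific

void startup(VkInstance instance, VkPhysicalDevice physical, uint32_t graphicsFamily) {
    // 1) Early in the program: create the (headless) device and all
    //    shader modules. No window or surface exists yet.
    VkDevice device = createDevice(physical, graphicsFamily);
    compileAllShaders(device);

    // 2) Later, once a window exists: create the surface, then the
    //    swapchain -- on the SAME device, so the shader modules created
    //    earlier remain valid.
    VkSurfaceKHR surface = createWindowSurface(instance);
    VkBool32 canPresent = VK_FALSE;
    vkGetPhysicalDeviceSurfaceSupportKHR(physical, graphicsFamily, surface, &canPresent);
    // if (canPresent) { create the VkSwapchainKHR here }
}
```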