vkCreateShaderModule hands SPIR-V code to the driver for "compiling", and this compiling can be "long" (seconds or even minutes, depending on the shader, obviously).
How I expect it to work: I expect vkCreateShaderModule "to finish compiling" and only then return, i.e. I do not expect "other code" to be executed right after vkCreateShaderModule before the shader is actually compiled.
My question about this behavior: some drivers return from vkCreateShaderModule immediately no matter what, even while the shader is "still compiling", and just continue executing other code. Everything works… until I hit my 1-second timeout on vkWaitForFences: for some reason the "shader compiling time" gets added to the vkWaitForFences wait, so when compiling takes longer than 1 second I get a "vkWaitForFences failed" error from my application.
(I set the 1-second timeout deliberately, so that a hang does not kill the user's OS session if something goes wrong.)
Should I expect this behavior and use some "minutes long" timeout for vkWaitForFences after calling vkCreateShaderModule? Or is this just driver "behavior" that should be fixed/changed?
What fences are you waiting on, and how do they relate to shader modules? Are those fences blocked on rendering commands that use pipelines built from those shader modules?
I wait on the Offscreen (framebuffer) fence before rendering the Main Swapchain; the Main Swapchain uses (reads) the Offscreen buffer.
I load "some large/complex shader" into the Offscreen pass (a triangle is drawn there) and wait for it before rendering the next pass.
(The code has no validation errors and everything works perfectly as expected; there are just a few driver versions with the weird behavior described in the first post.)
So this is in fact not just about shader building, but also about pipeline building.
In my case, some "old driver" (or a test driver, not a production-ready driver) has this behavior, but the behavior is very consistent.
I just tested the Vulkan samples and other tutorials, and I was not able to reproduce my "fence timeout" with a large shader (without changing the sample code). The Vulkan samples do not crash, but they do not use "multiple framebuffers (offscreen)"; with a single framebuffer (offscreen) my code also works…
My "fence timeout" (in my code) happens only when I have 8 (or more) framebuffers with the large shader in the first framebuffer, and everything after it has to wait for the previous framebuffer's render (every next FBO also uses vkWaitForFences to wait on the previous one). The result is a "fence timeout" on the swapchain vkWaitForFences where I wait for the last FBO.
Also, obviously, if there were an error in my vkWaitForFences logic (in my code where I wait for fences, or if I forgot to wait for something), the validation layers would complain, but there are no errors. (Loading a large shader that needs ~30 seconds of SPIR-V compile time in the driver, with validation layers enabled, also does not generate any errors.)
I saw some Vulkan examples use this code:
vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
or retry in a loop with a short timeout:
do {
    fenceRes = vkWaitForFences(device, 1, &renderFence, VK_TRUE, 10000);
} while (fenceRes == VK_TIMEOUT);
So it looks like setting the longest possible timeout in vkWaitForFences is "fine".
I think my question can be closed, because it is related only to this test driver; every "real" driver works as expected.