Is it safe to use VK_SUBOPTIMAL_KHR to detect window resizes?

With recent (summer 2021+, not sure of the exact version) Linux AMD Mesa drivers, resizing a Vulkan window on X11 or Wayland causes vkAcquireNextImageKHR on an AMD GPU to return either VK_SUBOPTIMAL_KHR or VK_ERROR_OUT_OF_DATE_KHR (seemingly at random).


From the spec: VK_SUBOPTIMAL_KHR: A swapchain no longer matches the surface properties exactly, but can still be used to present to the surface successfully.

Is it safe/expected to use VK_SUBOPTIMAL_KHR to detect a window resize, or is that unsafe?

It could be returned for reasons other than size. Changes in format, for example.

The windowing system ought to inform you when the size changes, so trying to rely on indirect mechanisms is not helpful.

Of course I process window system events before Vulkan rendering, and I check memory and everything else before resizing the Vulkan framebuffers.

My question is: is it safe to recreate everything (framebuffers etc.) on VK_SUBOPTIMAL_KHR? I want the Vulkan framebuffer size to match the window size.

Can VK_SUBOPTIMAL_KHR loop forever? (For example, when scaling part of the screen, could some internal optimization of Wayland or X11 keep allocating something that returns VK_SUBOPTIMAL_KHR and put part of my window there…)

Or how do I do this correctly?

Technically, you don’t have to wait for a suboptimal swapchain. You can recreate the swapchain every frame if you like.

I mean, performance may suffer if you do, because you’re redoing work that you don’t need to do. But it is “safe”.

I’d say you should recreate the swapchain when the API tells you that you must, and maybe periodically if it is suboptimal. But I wouldn’t do it every frame, and definitely not in a loop.

It is safe but not necessarily sufficient. In addition, you may or may not want to do it.

I believe that NVIDIA PC hardware on Windows never returns VK_SUBOPTIMAL_KHR, so it wouldn't be useful for detecting a resize there. MoltenVK throws VK_SUBOPTIMAL_KHR like crazy while the window is being resized; you could recreate immediately, but you may want to wait until the resizing stops.

This was also the reason for my question: there are too many combinations of hardware, OS, and window manager, and each behaves differently.

On AMD you can resize the window and, like you say, "just wait": it never throws VK_ERROR_OUT_OF_DATE_KHR. But on NVIDIA it is always VK_ERROR_OUT_OF_DATE_KHR no matter what (and the app crashes if you don't handle it by destroying/freeing/recreating your allocated data). VK_SUBOPTIMAL_KHR happens only on X11/XCB and/or XWayland.
On Wayland the resize logic is completely different from other window managers, and there are way too many platforms to support if you really care about being cross-platform. Wayland also has too many features that I left untested; I have no idea how the application will react to them (for example the screen-zoom case I mentioned: what the application does in that case and how it is rendered, I have no idea).

And because it's a GUI app it's obviously multithreaded. The OS delivers resize events to your app's UI thread at arbitrary times (only X11/XCB lets you collect events and hand them to a single thread), so you cannot expect the resize event to arrive before you render a frame.
So you have to be prepared to stop rendering at any point in your render loop when the window is resized. (Obviously my framebuffers each have their own rendering thread.)

For now I'm fine with supporting only NVIDIA and AMD; everything else is untested.
If my open-source tool is actually useful, someone will file a bug report if there is a problem; testing "how it actually works" everywhere needs more time and hardware than I have.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.