Disabling automatic xrWaitFrame delay?

If I understand correctly, the runtime may measure the application's render time and adjust the return time of the xrWaitFrame call so that rendering finishes just in time. This is to get the lowest latency possible.
But in our game engine we want to use all the render time available, and use automatic quality control to adjust the render workload so that rendering finishes just in time (but uses the full frame time). We would therefore like to disable this automatic delay. Is that possible?

I am new to the API, so I can only offer my early impressions. Unfortunately, there doesn't seem to be any configuration for this.

The workaround would be to render to a separate staging image, so that you are not blocked on the xrWaitSwapchainImage call before beginning to render the frame. Then the only work that blocks is a final blit into the swapchain image.

This sounds like deeper pipelining, which is supported. IIRC Unreal does something like this: it starts some non-pose-dependent render work in the background two frames ahead, using predictedDisplayTime + predictedDisplayPeriod to estimate the following frame's display time.

The #1 job of the runtime is to make sure the headset is fed with minimal-latency images. It will generally have a finely tuned control loop deciding when it starts composition for each frame, and when it releases your application from xrWaitFrame, based on trying to get everything to arrive on time, with the caveat that arriving a little too early (a late-latch that is not as late as possible) is very much better than arriving a little too late (missing a frame), especially at the compositor level. Since this is a quality issue central to usability, you won't find any modern runtime that lets you "disable" it, especially not an OpenXR one. (I'm not sure it would violate conformance or the spec, but nobody would do it, because missing frame deadlines reflects badly on the headset and could even make the user sick.)

That said, as I mentioned in your other post, my impression/experience writing runtimes is that the runtime is likely to "give you" a little more time than it expects you to use. That way, if you use a little bit more time this frame, you can still make the target. So, that would be one way to gradually increase your render complexity. You can also explicitly say "it's too late for me to get a frame done by the deadline I got from xrWaitFrame" by discarding that frame: per the OpenXR specification, calling xrBeginFrame again without an intervening xrEndFrame discards it.

More generally, this is a tricky problem. Some existing runtimes/legacy APIs have ways to provide feedback about how much render power you're using vs. what is available. I'm not sure anyone has found a really good API shape for such a feature, though. One issue with them, besides being a lot of work to implement on the runtime side, is that you're providing (possibly delayed) data to the application with the intent that it runs a control loop. From a runtime writer's POV, my compositor/app pacing control loop may interact badly with your application's completely opaque (from my POV) control loop and lead to oscillations (repeated dropped frames, quality bouncing up and down, etc.) or other bad behavior. This data is also really easy for an app to misuse or misinterpret. So, the risk to the runtime is very high, for uncertain reward.
