Display overlay without submitting frame possible?

I’m working on an OpenXR wrapper for OpenVR games. In certain games an overlay is displayed without submitting a frame, and the overlay is updated irregularly, sometimes seconds apart. This is commonly used for loading screens, where the overlay is updated only when a progress bar moves. Is there any functionality in OpenXR that can support this, where the runtime takes care of displaying an overlay submitted once and updating the display accordingly while the VR device moves?


I don’t think the core spec or the commonly supported extensions have any way to do this.

One possible way to implement it might be to use the XR_EXTX_overlay extension, and create the overlay in a separate OpenXR session. Only the Monado runtime supports this extension though.
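If you did go that route, the overlay session would be a second session created with an XrSessionCreateInfoOverlayEXTX chained into xrCreateSession. A minimal sketch, assuming the provisional extension’s struct layout (the field names here are an assumption and may differ between extension revisions, and the graphics binding struct still has to be chained as usual):

XrSessionCreateInfoOverlayEXTX overlayInfo{XR_TYPE_SESSION_CREATE_INFO_OVERLAY_EXTX};
overlayInfo.createFlags = 0;
overlayInfo.sessionLayersPlacement = 1;  // composite above the main session's layers

XrSessionCreateInfo createInfo{XR_TYPE_SESSION_CREATE_INFO};
createInfo.next = &overlayInfo;          // the graphics binding struct chains off overlayInfo.next
createInfo.systemId = systemId;          // from xrGetSystem, as for the main session
xrCreateSession(instance, &createInfo, &overlaySession);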

Would you also be submitting other frames at the same time, while wanting the earlier layers to keep displaying?

If it’s just one, you can definitely do a quad layer. The runtime will synthesize any additional frames required.

You do have to submit all XrCompositionLayer* structures with their swapchains every single frame you want them shown on the VR headset, but unless I’m mistaken, you do not have to render to or write to the swapchains every frame.

The closest match to an OpenVR Overlay is an XrCompositionLayerQuad. This composition layer is submitted together with a pose relative to whatever base XrSpace you specify, and the runtime will render that layer at the appropriate position.
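For illustration, filling one in looks roughly like this (the space, swapchain and sizes are placeholders):

XrCompositionLayerQuad quad{XR_TYPE_COMPOSITION_LAYER_QUAD};
quad.layerFlags = XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT;
quad.space = localSpace;                       // any XrSpace, e.g. a LOCAL or VIEW reference space
quad.eyeVisibility = XR_EYE_VISIBILITY_BOTH;
quad.subImage.swapchain = overlaySwapchain;    // swapchain holding the overlay texture
quad.subImage.imageRect = {{0, 0}, {512, 512}};
quad.subImage.imageArrayIndex = 0;
quad.pose = {{0.0f, 0.0f, 0.0f, 1.0f},         // identity orientation
             {0.0f, 0.0f, -2.0f}};             // two metres in front of the space origin
quad.size = {1.0f, 1.0f};                      // quad width/height in metres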

xrAcquireSwapchainImage is the function call that advances the internal image index of a swapchain in the runtime. When you call it, the runtime tells you the index of the image you should render into next.

To render a frame normally, you do something like this:

xrBeginFrame
[for every relevant swapchain]
  xrAcquireSwapchainImage
  xrWaitSwapchainImage
  [render something]
  xrReleaseSwapchainImage
xrEndFrame with composition layers referencing the relevant swapchain
xrWaitFrame (for the next frame)
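Concretely, the per-swapchain part looks something like this (error handling omitted):

uint32_t imageIndex;
XrSwapchainImageAcquireInfo acquireInfo{XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO};
xrAcquireSwapchainImage(swapchain, &acquireInfo, &imageIndex);

XrSwapchainImageWaitInfo waitInfo{XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO};
waitInfo.timeout = XR_INFINITE_DURATION;
xrWaitSwapchainImage(swapchain, &waitInfo);

// ... render into the swapchain image at imageIndex with your graphics API ...

XrSwapchainImageReleaseInfo releaseInfo{XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO};
xrReleaseSwapchainImage(swapchain, &releaseInfo);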

If you don’t have anything new to render, you can skip the swapchain interaction entirely:

xrBeginFrame
xrEndFrame with composition layers referencing the relevant swapchain
xrWaitFrame (for the next frame)
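In other words, you can keep handing the same already-populated layer struct to xrEndFrame. A minimal sketch, assuming frameState came from xrWaitFrame and quad is the struct filled earlier:

const XrCompositionLayerBaseHeader* layers[] = {
    reinterpret_cast<const XrCompositionLayerBaseHeader*>(&quad)};

XrFrameEndInfo endInfo{XR_TYPE_FRAME_END_INFO};
endInfo.displayTime = frameState.predictedDisplayTime;
endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
endInfo.layerCount = 1;
endInfo.layers = layers;
xrEndFrame(session, &endInfo);   // no swapchain acquire/wait/release needed this frame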

Thanks guys. This has to run on the MS OpenXR runtime, so unfortunately XR_EXTX_overlay is not an option.

Seems like the MS runtime doesn’t display extra frames for quad layers between submits. Forcing motion reprojection helps a little but doesn’t work well.

I’d been thinking about adding extra xrBeginFrame/xrEndFrame calls, but this could lead to issues if a new frame is submitted by the host application at the same time. I don’t want to add any synchronisation either.

It’s not too bad in the games I have running that do this (rf2 / DCS). There are a few points where the image gets stuck, but it’s mostly fine, so it’s not worth the overhead of monitoring the last frame submission, adding timers and synchronisation, etc.

Per the OpenXR spec, the quad layer must be included in the xrEndFrame call in order to be displayed, so I don’t believe that’s a problem with the MS OpenXR runtime.

You can, on the other hand, submit the same quad layer texture frame after frame, and the runtime will display the quad texture at the location you want. You don’t have to render the quad every frame. There’s also a “static swapchain” concept in OpenXR, where you render the texture once and never change it. It can save a bit of resources, since you indicate that the texture never needs to be updated after it is submitted.
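A sketch of creating such a static swapchain; the format here is just an assumed D3D11 one and should really be picked from xrEnumerateSwapchainFormats:

XrSwapchainCreateInfo info{XR_TYPE_SWAPCHAIN_CREATE_INFO};
info.createFlags = XR_SWAPCHAIN_CREATE_STATIC_IMAGE_BIT;   // contents are written exactly once
info.usageFlags = XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT;
info.format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;             // assumption; query the runtime instead
info.sampleCount = 1;
info.width = 512;
info.height = 512;
info.faceCount = 1;
info.arraySize = 1;
info.mipCount = 1;
xrCreateSwapchain(session, &info, &overlaySwapchain);
// Acquire, wait, render and release the single image once; after that it is never written again.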

Actually, reading your feedback again, about this “MS runtime doesn’t display extra frames for quad layers between submits”: I might have misunderstood what you mean by “display extra frames between submits”. Do you mean you are expecting the app to submit one frame and the runtime to simulate multiple frames while the app is running at a low framerate? Do you mean the quad layer didn’t get displayed at all in those inserted frames? Or do you mean the quad is displayed at an incorrect location, or jitters? Would love to know more details here.

It’s an odd use case because I’m trying to implement OpenVR using OpenXR. One of the things you can do in OpenVR is submit the equivalent of a quad layer to the compositor without a projection layer, and OpenVR then displays that layer as though you were rendering a full scene at full frame rate, even though you only do the one submit. So you can move your head around etc. without the app having to do any more submits, and you can then update the layer as and when needed. If you try to do the same in OpenXR, the runtime will just go to the rotating spheres if an update takes too long, and will only do a form of reprojection in between sparse frames.

In OpenXR you can also submit a quad layer in xrEndFrame, and the runtime will render the quad layer at its location on a best-effort basis. But you need to submit this quad layer in every xrEndFrame; it’s not submit-once-and-forget. If you do submit the quad layer in xrEndFrame and it’s not displayed at the location you want, it’s likely a runtime bug.

Thanks, but I’m not sure you’ve understood the difference between the OpenVR and OpenXR behaviours here. There is no problem with where the layer appears after an xrEndFrame; it’s that you need to keep calling xrEndFrame for the display to keep updating, whereas in OpenVR a single submit is enough and the image will update as you move your head (only for overlays [layers]). As @haagch pointed out, you can carry on submitting the same layers without updating the swapchain, but as the render loop is driven by the host application I don’t have control over when new frame data is submitted, so it would be problematic to safely generate extra submits.

I understand; the key difference is this “submit the quad layer in xrEndFrame every frame”. Submitting the same swapchain, with the same source image, at each xrEndFrame is what I would suggest as well. Is it possible for you to keep the quad’s swapchain image across the frame loop when the app doesn’t submit a new quad layer, and reuse that swapchain image for every xrEndFrame? When the app gives you a new quad texture, you then change the swapchain image.
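Something along these lines in the wrapper, where CachedOverlay and AppendCachedOverlay are hypothetical names just for illustration:

// Hypothetical wrapper-side cache, refreshed only when the game hands over a new overlay texture.
struct CachedOverlay {
    bool valid = false;
    XrCompositionLayerQuad quad{XR_TYPE_COMPOSITION_LAYER_QUAD};  // references the overlay swapchain
};
CachedOverlay g_overlay;

// Called from the wrapper's xrEndFrame path for every frame the game submits.
void AppendCachedOverlay(std::vector<const XrCompositionLayerBaseHeader*>& layers) {
    if (g_overlay.valid) {
        layers.push_back(
            reinterpret_cast<const XrCompositionLayerBaseHeader*>(&g_overlay.quad));
    }
}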

I don’t have control over the frame loop. There is a direct translation between OpenVR Submit() and OpenXR xrEndFrame(); the application drives that, not my DLL. The application will call Submit() once with the overlay [layer] and then not again. In OpenVR the handling of the overlay is done by SteamVR, and the application doesn’t need to send any more Submit() calls unless the texture for the overlay is updated.

Yes, adding my own loop would solve that issue, but it would also create a race condition between my xrBeginFrame/xrEndFrame calls and the host app’s submits, and for performance reasons I don’t want to add synchronisation primitives and take on the overhead of monitoring the submits and starting up my own frame loop when needed.

I don’t believe the OpenXR spec defines what should happen in this case, so maybe it will be runtime-dependent.

This behavior is clearly specified in the OpenXR spec, and all conformant runtimes must follow it.
https://www.khronos.org/registry/OpenXR/specs/1.0/html/xrspec.html#compositing

All composition layers to be drawn must be submitted with every xrEndFrame call. A layer that is omitted in this call will not be drawn by the runtime layer compositor.

Curious how the OpenVR API stops the rendering of a quad layer; does it have some other command to stop it? I wonder if your layer can remember the quad swapchain from previous frames and insert it into the layer structure of subsequent xrEndFrame calls? I assume the quad will always overlay on top? Because without specifying the order of layers in the end frame call, there have to be some hidden rules for layer z-order in the painter’s algorithm.

Sorry, but that is not clear. It is ambiguous about what the runtime should and can do in the event that another xrEndFrame is not submitted. It only clearly specifies what happens when you do submit an xrEndFrame.

It will continue to display the overlay at the specified position until you submit another frame. If the new frame doesn’t include the overlay, it will no longer be displayed.

The OpenVR overlay system is completely separate from the scene application render loop. The overlays are controlled by their own API and are designed to be persistent. If you don’t explicitly hide or destroy an overlay, it will stay visible even after the application has exited. It will persist until SteamVR itself is turned off.
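For comparison, the OpenVR side is roughly this; after one call sequence the overlay stays visible, tracked against the world, until it is hidden or destroyed (the texture pointer and transform values are placeholders):

vr::VROverlayHandle_t overlay;
vr::VROverlay()->CreateOverlay("my.loading.overlay", "Loading", &overlay);
vr::VROverlay()->SetOverlayWidthInMeters(overlay, 1.0f);

vr::HmdMatrix34_t transform = {{{1, 0, 0, 0}, {0, 1, 0, 1.5f}, {0, 0, 1, -2.0f}}};
vr::VROverlay()->SetOverlayTransformAbsolute(overlay, vr::TrackingUniverseStanding, &transform);

vr::Texture_t tex{d3dTexturePtr, vr::TextureType_DirectX, vr::ColorSpace_Auto};
vr::VROverlay()->SetOverlayTexture(overlay, &tex);
vr::VROverlay()->ShowOverlay(overlay);   // remains visible with head tracking, no per-frame calls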


Now I get the problem. In OpenXR it is assumed you always run an xrWaitFrame/xrBeginFrame/xrEndFrame cycle. Without that cycle, the runtime will assume your application is stuck and might use implementation-specific comfort options to try to make it look less jarring. You probably can’t really depend on consistent behavior to replicate the OpenVR Overlay system like that.

Does the application keep calling WaitGetPoses (which I assume is mapped to xrWaitFrame)? If so, maybe sneak in xrBeginFrame/xrEndFrame calls when it’s called again without a submit in between. If not, then I don’t think you have any other choice but to run your own thread.
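If WaitGetPoses were still coming in, that could look something like this hypothetical check in the wrapper (the flag and cached layer are invented names):

// In the wrapper's WaitGetPoses handler, right after xrWaitFrame for the new frame:
// if the game hasn't submitted anything since the last wait, push a frame ourselves
// so the cached overlay layer keeps being composited.
if (!submittedSinceLastWait && cachedQuadValid) {
    XrFrameBeginInfo beginInfo{XR_TYPE_FRAME_BEGIN_INFO};
    xrBeginFrame(session, &beginInfo);

    const XrCompositionLayerBaseHeader* layers[] = {
        reinterpret_cast<const XrCompositionLayerBaseHeader*>(&cachedQuad)};
    XrFrameEndInfo endInfo{XR_TYPE_FRAME_END_INFO};
    endInfo.displayTime = frameState.predictedDisplayTime;
    endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    endInfo.layerCount = 1;
    endInfo.layers = layers;
    xrEndFrame(session, &endInfo);
}
submittedSinceLastWait = false;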

Yeah, exactly. Unfortunately WaitGetPoses is not called again, but in any case xrWaitFrame needs to be paired with an xrBeginFrame (“An application must eventually match each xrWaitFrame call with one call to xrBeginFrame.”), so if we did get the WaitGetPoses calls it wouldn’t be an issue. I think I’ll just leave it as it is; no need to make the solution worse for the general case just for loading screens.

I do think there is room for the runtime to do this. All it would take is for the runtime to save the non-projection layer info and display it on each display refresh. There is certainly scope for this, as otherwise you wouldn’t be able to do motion reprojection.
