What is the right way to implement single-pass rendering with OpenXR?

The official hello_xr example in the OpenXR repo uses multi-pass rendering, a technique that is nowadays usually considered bad practice (inefficient). And I don’t see any discussion of single pass here, on GitHub, or in any docs.

The way things seem to work with OpenXR is:

  • for each view’s swapchain
  • call the currently active graphics plugin (e.g. OpenGL, Vulkan, etc.) via m_graphicsPlugin->RenderView(...)
  • implement your shaders & geometry draw calls in those plugins

And OpenXR takes care of which device you are running on, what its display specs are, etc.

Apparently for single pass you are supposed to use multiple views in the same swapchain, so you don’t call RenderView twice: you use the same swapchain handle in XrSwapchainSubImage for both views, with different image rects.

But I can’t find any docs.
How do I use different image rects? And how does the per-eye information (e.g. FOV, IPD, etc.) get plugged into this?

Has anyone made an example where this is updated for single pass? From openxr_program.cpp:

        // Render view to the appropriate part of the swapchain image.
        for (uint32_t i = 0; i < viewCountOutput; i++) { 
            // Each view has a separate swapchain which is acquired, rendered to, and released.
            const Swapchain viewSwapchain = m_swapchains[i];

            XrSwapchainImageAcquireInfo acquireInfo{XR_TYPE_SWAPCHAIN_IMAGE_ACQUIRE_INFO};

            uint32_t swapchainImageIndex;
            CHECK_XRCMD(xrAcquireSwapchainImage(viewSwapchain.handle, &acquireInfo, &swapchainImageIndex));

            XrSwapchainImageWaitInfo waitInfo{XR_TYPE_SWAPCHAIN_IMAGE_WAIT_INFO};
            waitInfo.timeout = XR_INFINITE_DURATION;
            CHECK_XRCMD(xrWaitSwapchainImage(viewSwapchain.handle, &waitInfo));

            projectionLayerViews[i] = {XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW};
            projectionLayerViews[i].pose = m_views[i].pose;
            projectionLayerViews[i].fov = m_views[i].fov;
            projectionLayerViews[i].subImage.swapchain = viewSwapchain.handle;
            projectionLayerViews[i].subImage.imageRect.offset = {0, 0};
            projectionLayerViews[i].subImage.imageRect.extent = {viewSwapchain.width, viewSwapchain.height};

            const XrSwapchainImageBaseHeader* const swapchainImage = m_swapchainImages[viewSwapchain.handle][swapchainImageIndex];
            m_graphicsPlugin->RenderView(projectionLayerViews[i], swapchainImage, m_colorSwapchainFormat, cubes);
 
            XrSwapchainImageReleaseInfo releaseInfo{XR_TYPE_SWAPCHAIN_IMAGE_RELEASE_INFO};
            CHECK_XRCMD(xrReleaseSwapchainImage(viewSwapchain.handle, &releaseInfo));
        }

No, not in general. It depends on your requirements, your use of multi-pass, and your target hardware.

Perhaps you have a specific context in mind?

Hello,

There are a couple of basic ways you can do single-pass rendering (and probably other methods as well):

  1. You can use texture arrays (arraySize=2 when allocating your swapchain) and have your shaders render to different indices in the array (e.g. SV_RenderTargetArrayIndex). When submitting your XrCompositionLayerProjection, you then specify the same XrSwapchain handle for both the left and right views, but with a different imageArrayIndex in the XrSwapchainSubImage (see the sketch after this list). The Microsoft BasicXrApp shows how to do that with Direct3D: OpenXR-MixedReality/samples/BasicXrApp at main · microsoft/OpenXR-MixedReality (github.com)

  2. You can use a double-wide swapchain, allocated with its width being twice the per-eye resolution, and use the imageRect to specify both views within the same swapchain when submitting a frame (this is the method you started describing above; also sketched below). You don’t need to worry about FOV/IPD: when filling in your XrCompositionLayerProjection struct, you still specify the FOV and eye pose per view, and the only overlap is using the same XrSwapchain handle with a different imageRect. I am not aware of any existing sample code that does this, but you can still take a look at the texture-array sample above, and it will show you nearly the same thing: instead of submitting the same XrSwapchain with two different values of imageArrayIndex, you specify two different imageRect values.
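
To make the submission side concrete, here is a rough sketch of how both methods fill in the projection views. It reuses the hello_xr names (m_views, m_configViews, projectionLayerViews); m_swapchain and useArray are made up for illustration:

        // Per-eye resolution from xrEnumerateViewConfigurationViews.
        const int32_t eyeWidth = static_cast<int32_t>(m_configViews[0].recommendedImageRectWidth);
        const int32_t eyeHeight = static_cast<int32_t>(m_configViews[0].recommendedImageRectHeight);

        // Both eyes are submitted against one XrSwapchain handle.
        // useArray = true  -> texture-array swapchain (method 1)
        // useArray = false -> double-wide swapchain   (method 2)
        for (uint32_t i = 0; i < 2; i++) {
            projectionLayerViews[i] = {XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW};
            projectionLayerViews[i].pose = m_views[i].pose;  // per-eye pose (this is where IPD lives)
            projectionLayerViews[i].fov = m_views[i].fov;    // per-eye FOV
            projectionLayerViews[i].subImage.swapchain = m_swapchain.handle;  // same handle for both views
            if (useArray) {
                projectionLayerViews[i].subImage.imageRect.offset = {0, 0};
                projectionLayerViews[i].subImage.imageRect.extent = {eyeWidth, eyeHeight};
                projectionLayerViews[i].subImage.imageArrayIndex = i;  // slice 0 = left, slice 1 = right
            } else {
                projectionLayerViews[i].subImage.imageRect.offset = {static_cast<int32_t>(i) * eyeWidth, 0};
                projectionLayerViews[i].subImage.imageRect.extent = {eyeWidth, eyeHeight};
                projectionLayerViews[i].subImage.imageArrayIndex = 0;  // single-layer image
            }
        }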

Hope this helps!

Yes, that partially helps! :smiley: I will reply with my progress. I still find it very strange that there is no sample for these strategies, as the literal industry standard for VR games (and game engines) is to use single pass; otherwise they choke performance-wise…

As a matter of fact, it is even hard to search for OpenXR SDK material online generally, because all the results that come up are about Unity, Unreal, and Valve.


Hmm, interesting reply. Could you please give some rough examples of these general situations where you would want multi pass? Because there is usually no gamedev situation where you would not want single pass…

Multi pass (i.e. no sharing) is only good if you have completely different things to draw to each target. So obviously yes if you are overlaying camera feeds and/or doing AR layering, but for 3D geometry, and for drawing anything to a pair of eyes, you want both eyes drawn in the same pass (sharing data), pretty much always as far as I can imagine…

It seems strange to me that this concept is so sparse in the Khronos community, built-in features, and docs :slight_smile: (roughly zero mentions, despite huge companies doing it; we even have GPU hardware support for it).

The paper Fast Multi-View Rendering for Real-Time Applications compares a few methods for multi-view rendering. I’ve only skimmed it, but the results seem to suggest that there isn’t necessarily just one right way - as usual :wink:

Just want to point out that this paper is nice, but it’s not actually about “multi pass rendering”, or multi pass vs. single pass.

“Fast Multi-View Rendering” refers to all sorts of optimizations, but they all involve reusing/sharing buffer data (i.e. pretty much single pass).

E.g. this bit from the paper, listing techniques which are, by the way, established tech used by game engines and by Meta, with support built into GPUs, etc.:

    “describes an MVR pipeline which uses a single pass and instanced rendering for geometry amplification, forwarding the output to a large, partitioned framebuffer”
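
For a flavor of what that looks like from the application side, here is a rough sketch of the instanced-stereo draw call in OpenGL; the function name and parameters are made up for illustration:

        #include <GL/glew.h>

        // Instanced stereo (sketch): draw each mesh once with double the instance
        // count; the vertex shader uses gl_InstanceID % 2 to select the per-eye
        // view-projection matrix and to place the result in the left or right half
        // of a double-wide framebuffer.
        void DrawMeshStereoInstanced(GLuint vao, GLsizei indexCount, GLsizei meshInstances) {
            glBindVertexArray(vao);
            glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr,
                                    meshInstances * 2);  // instance i renders for eye (i % 2)
        }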

Either way, my point was that what’s provided by the OpenXR SDK is not as production-ready out of the box as one might think (it’s not like it comes with some viable “fast multi-pass” alternative to single pass)…

The only provided example, hello_xr, is ~~old~~ and doesn’t help you along the path of how you’d actually integrate OpenXR.

I found this excellent one-file example that really explains what’s going on at every step: https://github.com/KHeresy/openxr-simple-example/blob/master/main.cpp (OpenGL)

I definitely recommend it for first-time orientation.

It does not do single pass, though. But it covers everything, including input actions and poses!

For an excellent single-pass implementation, check out janhsimon’s recent boilerplate project with OpenXR and Vulkan. It uses a single swapchain with VK_KHR_multiview images (multiview is also available in WebGPU and OpenGL); check out the answer on issue #1 for a great description.
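
For context, the core of that approach on the Vulkan side is quite small. Here is a minimal sketch, assuming a 2-layer color image as the render target; the function name and all values are illustrative, not taken from that project:

        #include <vulkan/vulkan.h>

        // Sketch: a minimal multiview render pass (VK_KHR_multiview, core in
        // Vulkan 1.1) that broadcasts every draw to both layers of a 2-layer image.
        VkRenderPass CreateStereoRenderPass(VkDevice device, VkFormat colorFormat) {
            VkAttachmentDescription color{};
            color.format = colorFormat;
            color.samples = VK_SAMPLE_COUNT_1_BIT;
            color.loadOp = VK_ATTACHMENT_LOAD_OP_CLEAR;
            color.storeOp = VK_ATTACHMENT_STORE_OP_STORE;
            color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
            color.finalLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;

            VkAttachmentReference colorRef{0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL};
            VkSubpassDescription subpass{};
            subpass.pipelineBindPoint = VK_PIPELINE_BIND_POINT_GRAPHICS;
            subpass.colorAttachmentCount = 1;
            subpass.pColorAttachments = &colorRef;

            const uint32_t viewMask = 0b11;        // render to layers 0 and 1 (left/right eye)
            const uint32_t correlationMask = 0b11; // hint: the two views are spatially similar
            VkRenderPassMultiviewCreateInfo multiview{VK_STRUCTURE_TYPE_RENDER_PASS_MULTIVIEW_CREATE_INFO};
            multiview.subpassCount = 1;
            multiview.pViewMasks = &viewMask;
            multiview.correlationMaskCount = 1;
            multiview.pCorrelationMasks = &correlationMask;

            VkRenderPassCreateInfo info{VK_STRUCTURE_TYPE_RENDER_PASS_CREATE_INFO};
            info.pNext = &multiview;  // this is what turns on multiview for the pass
            info.attachmentCount = 1;
            info.pAttachments = &color;
            info.subpassCount = 1;
            info.pSubpasses = &subpass;

            VkRenderPass renderPass = VK_NULL_HANDLE;
            vkCreateRenderPass(device, &info, nullptr, &renderPass);
            // The vertex shader then reads gl_ViewIndex (SPIR-V ViewIndex) to pick
            // the per-eye view-projection matrix.
            return renderPass;
        }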

Integrating OpenXR is very hard / needs a lot of minutiae dealt with in the correct way, and you’re never sure what “correct” is. So I strongly encourage these open boilerplate and example projects, because sitting in your basement trying to do it all yourself will take a long, daunting time at best.

Ah, very cool, that’s a fork of my colleague’s Linux-only example from here: https://gitlab.freedesktop.org/monado/demos/openxr-simple-example/ It is very useful for understanding the general concepts of OpenXR and the steps you’d probably go through, even though you would almost certainly not structure your own application as essentially a single function of straight-line code.

While I wouldn’t generally look to the conformance tests for guidance on how to use OpenXR or on best practices (since their goal is to test runtime behavior in the common and uncommon corners of the spec, for both valid and invalid usage), I do know that the CTS tests the three common render setups: one swapchain per eye, one double-wide swapchain using sub-rects, and one array swapchain with one slice per eye.
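
On the allocation side, those three setups differ only in how XrSwapchainCreateInfo is filled in. A rough sketch, with illustrative per-eye dimensions (m_session and m_colorSwapchainFormat as in hello_xr):

        const uint32_t eyeWidth = 1440, eyeHeight = 1600;  // per-eye resolution (illustrative)

        XrSwapchainCreateInfo info{XR_TYPE_SWAPCHAIN_CREATE_INFO};
        info.usageFlags = XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | XR_SWAPCHAIN_USAGE_SAMPLED_BIT;
        info.format = m_colorSwapchainFormat;
        info.sampleCount = 1;
        info.faceCount = 1;
        info.mipCount = 1;

        // (a) one swapchain per eye:  width = eyeWidth,     arraySize = 1, created twice
        // (b) double-wide:            width = 2 * eyeWidth, arraySize = 1, created once
        // (c) texture array:          width = eyeWidth,     arraySize = 2, created once
        info.width = 2 * eyeWidth;  // e.g. the double-wide layout
        info.height = eyeHeight;
        info.arraySize = 1;

        XrSwapchain swapchain;
        CHECK_XRCMD(xrCreateSwapchain(m_session, &info, &swapchain));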

Hello-XR is in fact not old; it’s regularly maintained and updated, but it does face a challenge as the most prominent non-engine example code. We chose to demonstrate the general process of how you might write a multi-API, multi-platform app in OpenXR, as well as an app that would very easily run on every OpenXR platform. However, it’s definitely not optimized or sophisticated in terms of rendering (much like the namesake “hello world” applications are usually not sophisticated in terms of terminal control and text rendering), as there are better examples for that - plus rendering tends to be highly graphics-API-specific. They are just different objectives to optimize for: a different point in the space of “sample OpenXR apps” than e.g. the “openxr-simple-example” discussed earlier.

It is open-source, btw, so you are welcome to submit changes to add “multi-view” rendering support, for review and possible merging, if it doesn’t distract too much from the underlying OpenXR APIs.

And yes, for better or worse, the bulk of reference material out there relates to the engines: they do represent the bulk of development on OpenXR. That said, we are working to improve our app-developer-focused documentation as well. There is actually a chunk of docs out there, but it tends to be scattered across vendor web sites right now. We had a list of references and links at one point, but I couldn’t find it last time I looked.
