Current recommendations on developing hardware for OpenXR?

Hello,

I’ve seen that there is meant to be a standardised device plugin interface in a later version of the OpenXR spec, but it hasn’t been released yet. Is there any current recommendation for developing device drivers for OpenXR?

Thanks in advance.

If you’re looking to develop a headset, the recommendation is to write a runtime. You can start with Monado, the open-source runtime/runtime-construction-kit that I help lead at Collabora: https://monado.freedesktop.org

If you’re looking to do an input device, your best bet for now is to reach out to individual runtime vendors. I’d love to have a standard for input-device plugins, and one would be more likely (in my opinion) to be adopted, but there isn’t one yet. Monado would also be a place this could be prototyped if you wanted to help drive the effort.

Thanks for the response - I’m more looking to do input devices (specifically controllers).

Does this mean that a formal device plugin interface is likely not going to be part of the specification and instead is going to be adopted in some form of extension?

The way Khronos specs in general work is as a core spec with extensions. In some cases, functionality follows a path from vendor extension → multi-vendor (EXT) extension → Khronos ratified (KHR) extension, sometimes continuing to integrate into a future core specification: https://www.khronos.org/registry/OpenXR/specs/1.0/extprocess.html#_typical_extension_process_flow

In the case of OpenXR, in order to support multiple graphics APIs, all graphics-API-specific support lives in extensions, so you can’t actually have a useful runtime without providing at least one extension (a graphics binding or headless). Extensions allow things to be optional: I imagine that while some runtime vendors might implement an input device extension, not all would (think of all-in-one devices or other resource-constrained systems). Additionally, providing an input device would require a very different API from the core app-focused API, so it’s likely that e.g. a device plugin might only use xrCreateInstance and some accessory functions (xrStringToPath, etc.) from the core spec and otherwise use a separate set of functions. (I made an input device API for an older pre-OpenXR VR API, OSVR, and it is possible to make device APIs pretty simple. There’s also the prior work of the SteamVR interfaces, though we’d avoid the headset-related APIs.) It’s possible it might be a non-extension, optional part of a future spec release, but given that our model for optional parts of the spec is an extension, I’d anticipate a KHR extension as the end goal, even though its usage would differ from that of the app-focused extensions.
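To make the "separate set of functions" idea concrete, here is a minimal sketch of what a device-plugin-side API *could* look like. To be clear: no such extension exists; every name here (`xrexDevice`, `xrex_register_device`, `xrex_submit_float`) is invented for illustration, and the tiny in-memory implementation just stands in for what a real runtime would do with timestamps and queuing.

```c
/* Hypothetical sketch only: no OpenXR input-device extension exists yet.
 * All names below (xrexDevice, xrex_*) are invented for illustration. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* A plugin-side handle for one input source, keyed by a semantic path
 * string in the style of "/user/hand/right/input/trigger/value". */
typedef struct xrexDevice {
    char  path[128];    /* semantic path the runtime would bind actions to */
    float analog_state; /* last reported scalar value, 0.0 .. 1.0          */
    int   connected;
} xrexDevice;

/* "Register" a device with the (imaginary) runtime plugin interface. */
static int xrex_register_device(xrexDevice *dev, const char *path) {
    if (strlen(path) >= sizeof(dev->path)) return -1;
    strcpy(dev->path, path);
    dev->analog_state = 0.0f;
    dev->connected = 1;
    return 0;
}

/* Push a new sample; a real runtime would timestamp and queue this so
 * the app side's xrSyncActions could later pick it up. */
static int xrex_submit_float(xrexDevice *dev, float value) {
    if (!dev->connected || value < 0.0f || value > 1.0f) return -1;
    dev->analog_state = value;
    return 0;
}
```

The point of the sketch is the shape: the plugin never touches swapchains or sessions, it just registers sources and streams state, which is why it would mostly need only xrCreateInstance plus path-handling helpers from the core spec.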

Thank you, that makes a lot of sense.

I’ll take a look into Monado and see if I am in any way able to help contribute to an extension for input devices, I’d be interested in helping with it.

Hi @ryliepavlik,
We have built CAVE VR hardware for six years (running on Windows) and would like to engage with OpenXR moving forward. We have had a look at Monado, but it seems to be Linux-based for the time being.
Do you have any suggestions on how we might proceed? Currently we support Unity (using our custom camera rig prefab) and Spout/Syphon. Would we be able to build a Unity app that supports OpenXR as a go-between?
Would love a nudge in the right direction!
Kind Regards
Max


Monado does run on Windows as well, it’s just not as polished there yet due to lack of customers and of R&D time to build on it. My employer does contracting to build on Monado, among other things, so if you have timelines you want to meet, etc., do get in touch and we’ll see what we can do. Otherwise I’m happy to help in public forums like this on my R&D time when possible.

In Unity, you’d ideally replace your custom camera rig prefab with a normal XR one. (You wouldn’t be able to make a compositor in Unity that exposes an OpenXR runtime, at least not easily or efficiently; it’s much easier to start with Monado.) However, you’d need Unity to add support for whatever OpenXR extension we’d come up with that would add the CAVE-like “form factor” and a more generic “view configuration” (right now it’s just handheld AR magic window and headset stuff, but several of us want a “bag of views”, which would be what you’d need for a projection or large-display immersive environment). An alternative option would be “emulating” a headset: basically advertising the head-mounted stereo form factor and reprojecting it in the compositor to work on a CAVE. Though that wouldn’t get you all the benefits of a CAVE, it would allow you to run unmodified OpenXR apps (MS Flight Simulator, Minecraft, Zombieland, …) on your CAVEs. You would probably want both: the headset emulation mode and a “native” CAVE mode.
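For context on where a CAVE view configuration would plug in: OpenXR apps discover supported view configurations via the spec’s two-call enumeration idiom (query the count, then fill a buffer). The sketch below mocks that idiom with a stand-in function and plain enums rather than calling a real runtime; `VIEW_CONFIG_CAVE_HYPOTHETICAL` and its value are invented to show where a new enumerant would appear, while the real API is `xrEnumerateViewConfigurations` with `XR_VIEW_CONFIGURATION_TYPE_*` values.

```c
/* Mocked sketch of OpenXR's two-call enumeration idiom. A hypothetical
 * CAVE/"bag of views" extension would add a new enumerant that apps
 * discover exactly this way. */
#include <assert.h>
#include <stdint.h>

typedef int32_t ViewConfigurationType;
enum {
    VIEW_CONFIG_PRIMARY_MONO      = 1, /* stands in for ..._PRIMARY_MONO   */
    VIEW_CONFIG_PRIMARY_STEREO    = 2, /* stands in for ..._PRIMARY_STEREO */
    VIEW_CONFIG_CAVE_HYPOTHETICAL = 1000999000, /* invented for illustration */
};

/* Mimics xrEnumerateViewConfigurations: call once with capacity 0 to get
 * the required count, then again with a buffer of at least that size. */
static int enumerate_view_configs(uint32_t capacity, uint32_t *count,
                                  ViewConfigurationType *out) {
    static const ViewConfigurationType supported[] = {
        VIEW_CONFIG_PRIMARY_STEREO,
        VIEW_CONFIG_CAVE_HYPOTHETICAL,
    };
    const uint32_t n = sizeof(supported) / sizeof(supported[0]);
    *count = n;
    if (capacity == 0) return 0;  /* first call: size query only */
    if (capacity < n)  return -1; /* real API: XR_ERROR_SIZE_INSUFFICIENT */
    for (uint32_t i = 0; i < n; ++i) out[i] = supported[i];
    return 0;
}
```

An app built against such an extension would simply see the extra configuration type in this list and could select it instead of (or alongside) the stereo one.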

Closer to the topic of the original thread: I don’t foresee an extension that would let you reuse someone else’s runtime, because the compositor is a pretty core part of things, and everybody making a “main” XR device (not a peripheral) wants to control the compositor. It’s quite possible to bolt a custom compositor onto Monado; otherwise we’d be happy to integrate CAVE support into the mainstream open source release too. (Please give me an excuse to expense some used high-end projectors and shutter glasses :wink: )

Hi @ryliepavlik,

Thanks for the detailed response. To say that I am new to OpenXR and Monado is accurate (started looking at it on Friday last week when I joined this forum).

Is there a good beginner’s guide out there so that I can get up to speed with some of the terms you have used? The OpenXR pages give a lot of information but lack a good high-level overview of the functional blocks required (e.g. a compositor) and of how to build/publish an extension.

Looking forward to understanding more!

Regards
Max

All the official OpenXR docs and repos are linked from the Khronos OpenXR Registry – see the extension process guide and the style guide for writing the spec text for an extension. As for the other parts (how to actually write a runtime), I’m not sure there’s a good guide anywhere. You need to be able to do tracking and input, accept frames, distort and possibly timewarp those frames, pace the application… I’m honestly not sure how I learned how to do it, but the “must” statements in the spec (the ones that aren’t about things like return values) are a starting point for requirements. There’s intentionally very little about the “how” of writing a runtime in the spec, only the “what” it must do, both for IP reasons and for flexibility, allowing runtime vendors to experiment to find the best techniques while still exposing a consistent interface.
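As a taste of one of those runtime jobs, “pacing the application” boils down to predicting when the next frame will actually hit the display. The sketch below is a deliberately toy version of that prediction, assuming only a known vsync period; it is *not* how any particular runtime does it (real pacing also accounts for app GPU time, compositor time, and latching), but it shows the kind of arithmetic behind the `predictedDisplayTime` that `xrWaitFrame` hands to apps.

```c
/* Toy frame-pacing sketch: given the last vsync time and the display
 * period, predict the next display time after "now". Real runtimes
 * also budget for app render time and compositor overhead. */
#include <assert.h>
#include <stdint.h>

typedef int64_t XrTimeNs; /* nanoseconds, in the spirit of OpenXR's XrTime */

static XrTimeNs predict_next_display_time(XrTimeNs last_vsync,
                                          XrTimeNs period,
                                          XrTimeNs now) {
    /* Whole periods elapsed since the last observed vsync, then one
     * more: the first vsync strictly after "now". */
    XrTimeNs elapsed = now - last_vsync;
    XrTimeNs frames  = elapsed / period + 1;
    return last_vsync + frames * period;
}
```

For a ~60 Hz display (period ≈ 16,666,666 ns) this just snaps “now” forward to the next vsync boundary, which is the baseline a runtime refines from.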

Starting from Monado is probably the most practical thing to do, but my colleague @haagch did publish some minimal starting code to give an idea of one way you might begin if you really wanted to write the entire thing from scratch: Christoph Haag’s openxr_runtime_template on GitLab. Looks like he neglected to put license headers on it, so until that’s fixed it’s “look but don’t touch”. But again, starting from Monado (which is intentionally designed as more or less a “construction kit”) is probably the best route. A somewhat high-level view of how Monado fits together is in the Monado docs, under “Understanding and Writing Targets: Connecting the Pieces”.

Right, I put the BSL license there now, just like Monado. But it’s supposed to be “public domain” example code, feel free to do whatever you want with it, or just ask for another license if you legally need one.

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.