Feedback Thread: Khronos Releases OpenXR 0.90 Provisional Specification

Please leave feedback on this thread for the OpenXR Working Group.

March 18, 2019 – 6:00 AM PT – Game Developers Conference, San Francisco – Today, The Khronos® Group, an open consortium of leading hardware and software companies creating advanced acceleration standards, announces the ratification and public release of the OpenXR™ 0.90 provisional specification. OpenXR is a unifying, royalty-free, open standard that provides high-performance access to augmented reality (AR) and virtual reality (VR)—collectively known as XR—platforms and devices. The new specification can be found on the Khronos website and is released in provisional form to enable developers and implementers to provide feedback at the OpenXR forum.

The OpenXR 0.90 provisional release specifies a cross-platform Application Programming Interface (API) enabling XR hardware platform vendors to expose the functionality of their runtime systems. By accessing a common set of objects and functions corresponding to application lifecycle, rendering, tracking, frame timing, and input, which are frustratingly different across existing vendor-specific APIs, software developers can run their applications across multiple XR systems with minimal porting effort—significantly reducing industry fragmentation.

The Khronos OpenXR working group was formed in early 2017 with the support and participation of leading XR companies. Throughout the development of the specification, multiple Khronos members have been developing independent implementations to ensure a robust and complete specification. Many of these implementations are becoming available for developers to evaluate, including the ‘Monado’ OpenXR open source implementation from Collabora and the OpenXR runtime for Windows Mixed Reality headsets from Microsoft, shipping today. Additionally, the Unreal Engine from Epic plans to continue to support OpenXR.

Links to these implementations and more information can be found on OpenXR Overview - The Khronos Group Inc.

“OpenXR seeks to simplify AR/VR software development, enabling applications to reach a wider array of hardware platforms without having to port or re-write their code and subsequently allowing platform vendors supporting OpenXR access to more applications,” said Brent Insko, lead VR architect at Intel and OpenXR working group chair. “The OpenXR provisional specification—together with the runtimes publicly available at launch and coming in the next few weeks—will enable hands-on, cross-platform testing by application and engine developers. The working group welcomes developer feedback to ensure an OpenXR 1.0 specification that truly meets the needs of the XR industry.”

Read the complete Press Release for more details or visit the new OpenXR website.


I don’t see hand skeletons or hand tracking inputs in this release. Is there a view to adding them in the future?

The first release is intended to target the bulk of existing consumer implementations. Once we’ve established that core, new functionality will probably arrive as extensions first and later be rolled into core API revisions.

Two good questions were asked on Twitter:

https://twitter.com/cyannick/status/1107638570943168513

Khronos: “The specification and reference pages are available, as is prototype runtime support. Simple example code to test and build upon is available from the Khronos OpenXR web site.”

https://twitter.com/AdvanceSoftware/status/1107630850647023617

Khronos: “Not in the core API for 0.90, but some features may come in later revisions. Additionally, as with other Khronos APIs, OpenXR is extensible. Feel free to discuss more on the Forums or Slack. Links on OpenXR page.”

How should an application detect momentary inputs, such as a click that both starts and ends in between calls to xrSyncActionData? Are runtimes expected to latch boolean actions until they are synced? Similarly, for relative analog data like trackballs, are runtimes expected to integrate data until it’s synced? If so, is there any way for an application to determine the duration of a momentary button press, or the velocity and duration of a relative datum?

The xrSyncActionData design seems strangely limiting in this regard compared to a more traditional event-driven input pipeline.
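
For concreteness, here is a minimal sketch of the polling pattern in question. Struct and function names follow the published OpenXR headers and the exact 0.90 provisional signatures may differ; `session` and `clickAction` are hypothetical handles assumed to have been created elsewhere.

```cpp
// Hedged sketch: reading a boolean action after a sync call.
// changedSinceLastSync only reflects a difference from the previous sync,
// which is exactly why a click that begins AND ends between syncs may be
// invisible unless the runtime latches it.
XrActionStateBoolean state{XR_TYPE_ACTION_STATE_BOOLEAN};
XrActionStateGetInfo getInfo{XR_TYPE_ACTION_STATE_GET_INFO};
getInfo.action = clickAction;  // hypothetical XrAction

if (XR_SUCCEEDED(xrGetActionStateBoolean(session, &getInfo, &state)) &&
    state.isActive && state.changedSinceLastSync) {
    // state.currentState gives the new value and state.lastChangeTime the
    // time of the edge, but not the duration of a press that was fully
    // contained within the frame.
}
```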

A post was split to a new topic: Problem getting Microsoft ‘Mixed Reality OpenXR Runtime’ app to switch runtimes

Does anyone know if getting 6DOF coordinates from devices (sensors/fusion) and picture distortion/color-space parameters is a simple process with OpenXR? I feel like this is the meat of VR software, and right now it is impossible for noncommercial software to manage. I don’t want to say OpenXR is overbroad or overdesigned, but it’s hard to tell at a glance what services it’s providing, since these are very simple things that just require cooperation and organization from vendors, and possibly hobbyists alike. (I say hobbyists because I wonder if companies like Sony will make their PlayStation peripherals work with OpenXR, and if not, whether there will be an unofficial channel to integrate popular devices without their manufacturer doing so. I can help with that if anyone is interested.)

It’s basically a Tower of Babel problem. I know there is a lot of bleeding-edge technology out there to accelerate things, but the basics of the technology are quite simple, so I hope the API can be too.

The runtime is expected to manage distortion for you. Getting tracking data is straightforward enough, though there are a few different ways:

  • The spec contains example code for render views
  • Other tracking data (e.g. hand positions) should be obtained by creating an action space and locating it relative to some base space (typically the STAGE reference space) with xrLocateSpace; see the sketch after this list.
  • You can use the same API to locate reference spaces directly with respect to each other (for example, VIEW with respect to STAGE).
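
A minimal sketch of that pattern follows. Names track the published OpenXR headers and may differ slightly in the 0.90 provisional spec; `handActionSpace`, `stageSpace`, and `predictedDisplayTime` are hypothetical values assumed to come from earlier setup.

```cpp
// Hedged sketch: locate a hand action space relative to the STAGE
// reference space at the frame's predicted display time.
XrSpaceLocation location{XR_TYPE_SPACE_LOCATION};
if (XR_SUCCEEDED(xrLocateSpace(handActionSpace, stageSpace,
                               predictedDisplayTime, &location)) &&
    (location.locationFlags & XR_SPACE_LOCATION_POSITION_VALID_BIT)) {
    const XrPosef &handPose = location.pose;  // orientation + position
    // ... drive rendering or interaction from handPose
}
```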

I think that is taking too limited a view, and it may prove OpenXR unhelpful for many use cases. An application should be able to generate its stereoscopic views and post them to the device’s display(s) without the involvement of OpenXR, since OpenXR cannot (and probably should not) implement APIs compatible with every graphics package in the world, existing or yet to exist.

I assume there must be a windowless operation mode, concerned only with sensor reports. However, if OpenXR is too restrictive, there will remain the side problem of supporting devices and use cases that don’t fit OpenXR’s vision, and in that case it will not function as a true, definitive bridge for VR applications. (Even if it can only wrangle a subset of devices into one pathway, that’s better than nothing. But if it cannot render correctly, it won’t be of much use, unless it exposes the device’s physical characteristics so we can render ourselves.)

EDITED: Can anyone say whether OpenXR is compatible with all graphics chipsets? Or is it one of those deals where it only works with newer, high-end graphics hardware?

I’m not certain what you mean by “graphics package” here. Hardware vendors can provide OpenXR implementations that support their systems’ optical requirements, and hopefully in the future we’ll have a device plugin interface allowing vendors to provide small drivers to bring support for new hardware to existing runtimes. Graphics APIs are slow moving in general, and you can reasonably expect Khronos to publish OpenXR extensions to support new ones as necessary. Extensions already exist for Vulkan, OpenGL, and DirectX 10 through 12.
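
For illustration, requesting one of those extensions happens at instance creation, roughly like this. This is a minimal sketch; names follow the published OpenXR headers, and the exact 0.90 provisional names may differ slightly.

```cpp
// Hedged sketch: creating an XrInstance with the OpenGL binding
// extension enabled.
#include <openxr/openxr.h>
#include <cstring>

const char *extensions[] = {"XR_KHR_opengl_enable"};

XrInstanceCreateInfo createInfo{XR_TYPE_INSTANCE_CREATE_INFO};
std::strncpy(createInfo.applicationInfo.applicationName, "HelloXR",
             XR_MAX_APPLICATION_NAME_SIZE);
createInfo.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;
createInfo.enabledExtensionCount = 1;
createInfo.enabledExtensionNames = extensions;

XrInstance instance = XR_NULL_HANDLE;
XrResult result = xrCreateInstance(&createInfo, &instance);
// On success, the instance can create sessions backed by OpenGL.
```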

Managing the complex, vendor-specific, and performance-sensitive process of compositing, distorting and presenting an image is one of OpenXR’s core value propositions.

Refer to the XR_KHR_headless extension.

OpenXR is not itself sensitive to the graphics device you use. It does not replace, or require invasive modifications to, the graphics API an application uses.


FYI, I’m not receiving email notifications for this thread right now, even though I’m Watching it.

My concern is that if OpenXR is managing this, then I can’t easily do the distortion in the same shader that does my other full-screen effects. As a programmer I would prefer to implement the shaders myself, with constants provided by OpenXR for the different headset models.

Direct3D 9 is a very good API. It’s much better than OpenGL and much more user-friendly than the other packages like it. But even though it’s not provided, I think trying to make OpenXR an umbrella for every graphics package under the sun is a bad idea, unless it’s trying to replace OpenGL completely. It looks like it’s trying to build an entire paradigm around mixed reality instead of just providing a sane interface for hardware to expose its parameters through. As a developer, what I need is one interface to target instead of 30 for different devices I don’t have time to even hear about. If OpenXR isn’t that, then I can only support one or two devices for end users. I will pick the most consumer-friendly devices in that case: inexpensive, economical devices that I worry OpenXR will not integrate with because they are not luxurious enough to be on its radar.

I also don’t want to be bothered or asked to install software like Steam to be able to use devices. These are the things that are keeping me from doing more than experimental work with VR, or from just supporting Sony’s PlayStation VR, for example. I will never work directly with the Oculus or Vive SDKs or others; I expect OpenXR to make those work before I will ever take a look at their kits. And I expect OpenXR to be able to work with a Vive without installing Steam (unless that’s what it requires as part of its installation), but my software must be able to work with just OpenXR code.

In any case, this is just what I hoped OpenXR would amount to. It feels like I’ve been waiting about three years for the specification to be published; until now there was no way to tell what it even is. I apologize if I’m going overboard in this thread, and thanks for addressing my concerns. I’m a little squeamish about diving into preliminary OpenXR, not least because the new materials look pretty opaque to me at a glance.

EDITED: I’m just singling out Direct3D 9 as an example. Anything can render to a screen. It’s not cross-platform, but if you have Direct3D 9 code already, you’d use that, and I would prefer it to OpenGL, short of writing a wrapper around both. While some GPUs have VR feature sets, they are not commonplace or affordable, and I think independent or noncommercial developers are unwise to push the limits of hardware to begin with, so they do not require, nor should require, these dedicated acceleration features. At its base VR is very simple and can work well on very inexpensive systems. It’s really no different from anything else; there is just a mapping problem that needs to be solved. If OpenXR doesn’t solve it, it will fall on the shoulders of developers like myself. (I really hope OpenXR’s vision is all-inclusive.)

OpenXR itself demands very little support from graphics APIs and chipsets.
Mostly the requirement is that the driver can render into a texture and pass handles to that texture around, for example OpenGL texture IDs in an XrSwapchainImageOpenGLKHR. The specification also defines a graphics binding struct for each of the graphics extensions, like XR_KHR_opengl_enable, where you have to pass a bunch of platform specifics to the runtime. This is something pretty much any graphics driver should support.
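
A minimal sketch of that handoff, assuming a `swapchain` has already been created under XR_KHR_opengl_enable (names follow the published headers; 0.90 may differ slightly):

```cpp
// Hedged sketch: enumerate swapchain images to get plain OpenGL texture
// ids the application can render into.
#define XR_USE_GRAPHICS_API_OPENGL
#include <openxr/openxr.h>
#include <openxr/openxr_platform.h>  // XrSwapchainImageOpenGLKHR
#include <vector>

uint32_t count = 0;
xrEnumerateSwapchainImages(swapchain, 0, &count, nullptr);

XrSwapchainImageOpenGLKHR proto{};
proto.type = XR_TYPE_SWAPCHAIN_IMAGE_OPENGL_KHR;
std::vector<XrSwapchainImageOpenGLKHR> images(count, proto);

xrEnumerateSwapchainImages(
    swapchain, count, &count,
    reinterpret_cast<XrSwapchainImageBaseHeader *>(images.data()));

// images[i].image is now an ordinary GLuint texture handle.
```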

Runtimes are then completely free to choose how to get these textures displayed on the HMD hardware, and they may require additional functionality.
Runtimes will most likely implement some kind of sharing of the textures with their own graphics context.
For example, in our Monado runtime the compositor is Vulkan based, so it requires a working Vulkan driver. To run Vulkan-based OpenXR applications on it, the VK_KHR_external_memory_fd Vulkan extension has to be supported so that textures can be shared across the application’s and the compositor’s Vulkan contexts. To run OpenGL applications on it, the GL_EXT_memory_object_fd OpenGL extension has to be supported for the same purpose.
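
For example, an application targeting such a runtime could check for that requirement up front with standard Vulkan calls. A minimal sketch:

```cpp
// Minimal sketch: does this VkPhysicalDevice expose
// VK_KHR_external_memory_fd, as the Monado compositor described above
// requires?
#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

bool HasExternalMemoryFd(VkPhysicalDevice dev) {
    uint32_t n = 0;
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &n, nullptr);
    std::vector<VkExtensionProperties> props(n);
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &n, props.data());
    for (const VkExtensionProperties &p : props)
        if (std::strcmp(p.extensionName,
                        VK_KHR_EXTERNAL_MEMORY_FD_EXTENSION_NAME) == 0)
            return true;
    return false;
}
```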
But requirements like these are not mandated by OpenXR itself. I don’t think any production runtime will do this, but a runtime could also hijack the application’s graphics context to create a compositor window, render the textures into it on every xrEndFrame() with distortion correction etc. applied, and then restore the state of the application’s rendering. That approach could work on pretty much any graphics driver without any further requirements.

While you can use the headless extension to get mostly just input from devices, I don’t think there currently is anything in OpenXR that passes distortion correction information to the application.
To me this sounds like a good candidate for an extension. Another use case for this might be stacking APIs where an OpenXR runtime is wrapped to provide some other (future?) API on top of it without necessarily using the OpenXR runtime compositors.
As inspiration, Valve’s OpenVR contains a function where applications can sample screen coordinates and get the distorted coordinates as a result, from which an application can build a distortion mesh. It also has a flag for submitting textures that says distortion correction has already been applied.
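
Roughly, an application using that OpenVR function could sample a grid like this. The sketch is against the real IVRSystem::ComputeDistortion API; grid resolution and mesh assembly are illustrative and left out.

```cpp
// Hedged sketch: sample OpenVR's per-eye distortion function over a grid
// to build a distortion mesh.
#include <openvr.h>

void SampleDistortionGrid(vr::IVRSystem *sys, vr::EVREye eye, int gridN) {
    for (int y = 0; y <= gridN; ++y) {
        for (int x = 0; x <= gridN; ++x) {
            float u = float(x) / gridN;
            float v = float(y) / gridN;
            vr::DistortionCoordinates_t dc;
            if (sys->ComputeDistortion(eye, u, v, &dc)) {
                // dc.rfRed / dc.rfGreen / dc.rfBlue hold the per-channel
                // distorted UVs for this grid vertex.
            }
        }
    }
}
```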

Looking more towards the future, it sounds to me like you really want to load device plugins directly and implement only the parts of a runtime that you need yourself. The specification for device plugins is not ready yet; it could still take a while. And if you go this route, you don’t get the automatic benefits that OpenXR runtimes provide, for example direct mode, reprojection/timewarp/spacewarp, and the input binding system.

Currently, without device plugins, the situation is similar to graphics drivers. You can think of OpenXR as a specification for how to communicate with VR hardware, just like OpenGL or Vulkan is a specification for how to communicate with GPUs. In that analogy, OpenXR runtimes translate OpenXR API calls into something a specific piece of VR hardware understands, just like a Vulkan driver translates Vulkan API calls into something a specific GPU understands.
So if you write an OpenXR application using a WMR headset on the WMR runtime, someone with a different HMD can take this application and run it unmodified on a runtime that supports their HMD. I can’t speak for Valve, but my guess is that once they have an OpenXR runtime with support for the Vive, it will only be distributed through Steam.


I wrote a premature reply. I’m trying to read the specification over. It’s a little unusual. I wonder what is required to develop a “runtime”; by which I mean, is anyone permitted to do this, i.e. for any device(s)? I wonder because so many VR devices have characteristics that are not unique. A headset is a monitor (or two), for instance. Controllers are, well, controllers; there are just a lot of them. Is a “runtime” just a wrapper around operating-system-discoverable devices, or can it be? Otherwise it looks like every company with a VR product must implement OpenXR itself (for platform X) if their device is to work with the API. A lot of these are USB devices.

Yes, anyone is allowed to implement the specification, i.e. to write a runtime.

Most likely you are not allowed to claim that your runtime implements OpenXR unless it passes the conformance tests. There is currently no conformance test suite for OpenXR, so it’s a moot point right now, but I believe there will be one for 1.0. You can of course still release a runtime that is not officially conformant; you just won’t get the benefits described at API Adopter Program - The Khronos Group Inc

Yes and no. Yes, an HMD display is just a monitor, but it also has specific properties, like the physical size of the screen, the distance of the eyes from the screen, the distance between the lens centers / IPD adjustment, etc., that dictate how you have to render so that the world appears at the correct scale to the user.
On the tracking side, even just reading IMU data is not that straightforward: many HMDs store some sort of calibration data on the HMD, and how you read and apply this calibration data is HMD dependent. And then of course you have lighthouse tracking with the Vive, camera-based outside-in tracking with the Rift CV1, and camera-based inside-out tracking with the Rift S and WMR, which of course also have camera-based outside-in tracking for the controllers.
OpenXR Runtimes abstract all this away, just like SteamVR or the Oculus runtime, but now they share the same API at the front end.
Runtimes may provide more features; for example, a UI for setting up room-scale tracking and a UI for setting up controller bindings would be useful.
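
To make that abstraction concrete: regardless of the HMD, an application just asks the runtime for per-eye poses and fields of view each frame. A minimal sketch; names follow the published headers and may differ slightly in 0.90, and `session`, `stageSpace`, and `predictedDisplayTime` are hypothetical values from earlier setup.

```cpp
// Hedged sketch: the device-specific optics and IPD surface to the app
// only as per-view poses and FOVs from xrLocateViews.
XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
locateInfo.displayTime = predictedDisplayTime;  // from xrWaitFrame
locateInfo.space = stageSpace;

XrViewState viewState{XR_TYPE_VIEW_STATE};
XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
uint32_t viewCount = 0;
xrLocateViews(session, &locateInfo, &viewState, 2, &viewCount, views);

// views[i].pose and views[i].fov are all the app needs to build its
// view and projection matrices, whatever the headset.
```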

At the moment it is likely that each of the big vendors will make their own runtime for the hardware they have, but there’s no reason a runtime could not support more HMDs, as long as it knows how to interpret their tracking data and how to render to their screens. Our Monado runtime currently makes use of OpenHMD, libsurvive, and a native OSVR HDK driver for HMD support. How to do this is currently runtime-specific, though.

Once the device plugin interface specification is finished and released, there will be an official way to encapsulate the pure hardware specific parts of a HMD driver and share them between different runtimes.


Thanks. I’m very aware of the realities, which is why I worry OpenXR is not transparent enough. I guess what I’m really wondering (rhetorically) is whether Khronos wants to limit “adopters” to paid members of Khronos. If Sony doesn’t implement OpenXR itself, there’s a very good possibility I will end up implementing OpenXR for it, with some others not present here involved as well. I am involved with the COLLADA standards work, so I also wonder whether I might be able to do something on behalf of Khronos, but their actual staff is more like a skeleton crew.

EDITED: I guess I have to read some more to figure out whether these runtimes are managed by a single OpenXR front end (I suppose that would be another runtime, if so), but I don’t relish the idea of linking to more than one OpenXR shared library.

EDITED: The Adopter Program includes a “Pay the Adopter Fee” step ($11K), so it’s out of the question for me, unless I can get the webmaster to give us a pass. But that’s just for logos, I think. (Sorry, I opened the provided link to the Adopter Program in a background tab and promptly wandered off earlier.)