Unusual input device support

§6.3.2 of the provisional specification states:

Runtimes must ignore input source paths that use identifiers and component names that do not appear in this specification or otherwise do not follow the pattern specified below.

This forbids runtimes from supporting various devices that diverge significantly from traditional Xbox controllers or Vive wands. For example, flight-simulation equipment (a popular VR application) like the Thrustmaster Warthog or the CH Throttle Quadrant frequently involves large numbers of inputs that cannot reasonably be represented within the standard device input identifier space, both because they involve unusual data types (like a 3-position switch, or a digital dial with several discrete settings) and because the sheer quantity and placement of inputs exceeds the allowed possibilities.

While an extension could introduce new input types, it is unrealistic for devices like this to rely on the willingness of major runtime vendors to implement support for functionally device-specific extensions to provide input identifiers to serve an audience they may not care about. Although details on the device plugin layer are scarce, the quoted language suggests that devices will not be permitted to define novel input subpaths. This imposes a chilling effect on development of and support for input devices with form factors and individual inputs significantly different than those already widely adopted.

It’s unclear to me what the benefit of this restriction is. Applications cannot enumerate unbound input sources, and should not care about the structure of the paths to sources that are bound to their actions. As it stands, it seems that unusual input devices will be forced to make compromises similar to the ones made for compliance with legacy joystick APIs: imitating Xbox controllers, presenting themselves as multiple devices, and abusing the intended semantics of the signals they produce. These compromises make it difficult or impossible to use such a device with an application that was not designed with it in mind.

OpenXR’s abstract action paradigm has the capacity to solve the traditional chicken-and-egg problem of application support for unusual input devices by rendering applications insensitive to input device details. However, without allowances for device plugins to introduce input subpath locations and/or identifiers unknown to the runtime, and for runtimes to allow such inputs to be bound to actions, the problem will have only been moved into the runtime, rather than eradicated. Can we expect the device plugin interface to provide this capability?
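To make the abstraction concrete, here is a toy model of action-based input. This is not the OpenXR API and all names in it are invented; it only illustrates the claim above that an application polling action state need never see the concrete device paths the runtime bound to it:

```python
# Toy model of action-based input (illustrative only; not the OpenXR API).
# The app sees only action names and states; the runtime owns the
# path -> action binding table, so any device path can drive any action.

class Runtime:
    def __init__(self):
        self.bindings = {}     # input path -> action name
        self.input_state = {}  # input path -> raw boolean value

    def bind(self, path, action):
        # The user or runtime may bind *any* path, including ones the
        # app has never heard of (e.g. a throttle-quadrant switch).
        self.bindings[path] = action

    def set_input(self, path, value):
        self.input_state[path] = value

    def get_action_state(self, action):
        # Combine all inputs bound to this action (boolean OR here).
        return any(
            self.input_state.get(path, False)
            for path, bound_action in self.bindings.items()
            if bound_action == action
        )

rt = Runtime()
rt.bind("/user/hand/right/input/trigger/click", "teleport")
rt.bind("/user/custom__bigco/input/switch_3pos/up", "teleport")  # unusual device
rt.set_input("/user/custom__bigco/input/switch_3pos/up", True)
print(rt.get_action_state("teleport"))  # True: app never saw the device path
```

In this model, supporting an unusual device requires no application changes at all; only the binding table grows.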

The language you quoted is about namespacing, and is not intended to restrict a runtime’s ability to support arbitrary devices. What we wanted to avoid was having a runtime add a /user/thingie path with an /input/foozle/click subpath that then got in the way of adding a more general /user/thingie or /input/foozle to the core spec in a future revision. The same thing applies to interaction profiles, which is why those are namespaced by vendor name.

A single extension can add any number of user or input source paths. For instance, the hypothetical company BigCo could have an extension called BIGCO_LOTS_OF_PATHS that added:

  • Applications can include paths to any device that is active on the system by using the /user/custom__bigco path. A full list of the available custom device paths can be found .
  • Custom interaction profiles for devices are published on the bigco custom device forum. Input source paths in those interaction profiles use the form …/input/custom__bigco/ for non-standard input sources and locations.
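The namespacing policy described above can be sketched as a path check. This is a hypothetical model, not spec behavior: the identifier set, the `custom__<vendor>` pattern, and the function name are all invented for illustration of how a runtime might accept vendor-namespaced identifiers while ignoring unnamespaced novel ones:

```python
import re

# Hypothetical acceptance check in the spirit of the namespacing rules
# described above. STANDARD_IDENTIFIERS and the custom pattern are
# invented for illustration; they are not the spec's actual lists.

STANDARD_IDENTIFIERS = {"trigger", "thumbstick", "trackpad", "squeeze"}
CUSTOM_COMPONENT = re.compile(r"^custom__[a-z0-9]+$")

def accepts_input_path(path):
    """Accept standard identifiers plus vendor-namespaced custom ones."""
    parts = path.strip("/").split("/")
    if "input" not in parts:
        return False
    idx = parts.index("input") + 1
    if idx >= len(parts):
        return False
    identifier = parts[idx]
    return identifier in STANDARD_IDENTIFIERS or bool(CUSTOM_COMPONENT.match(identifier))

print(accepts_input_path("/user/hand/right/input/trigger/click"))          # True
print(accepts_input_path("/user/custom__bigco/input/custom__bigco/dial"))  # True
print(accepts_input_path("/user/hand/right/input/foozle/click"))           # False
```

The vendor tag inside the identifier is what keeps a future core-spec /input/foozle from colliding with anything an extension shipped earlier.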

An application that wants to use those paths in its suggested bindings could enable the extension and use them freely. For applications without native “thingie/foozle” support, the runtime is still free to rebind to thingies and foozles; it would just need to do it through an interaction profile that the application does support.

Or at least that was all the intent. If you spot something in the spec that prevents such an extension, or that prevents the runtime from rebinding under the hood, please point it out. Those are definitely things to fix by 1.0.

Device plugins are not in the 0.9 spec, and aren’t expected to be in 1.0. We support custom paths for input sources all the way from drivers to applications in SteamVR, and we will push for a similar capability in OpenXR device plugins. Custom /user paths aren’t in SteamVR, but they are a frequent request from driver developers, so they’re also something to keep in mind. However, it’s impossible to say for sure what will end up in the device plugin part of the spec.

If you want to help make sure this capability ends up in the device plugin API when it arrives, the best way to make that happen is to join Khronos and start working on that part of the spec. :slight_smile:

Thanks for the response!

My concern is that an extension represents a steep barrier for niche devices, particularly if the runtime must be modified to implement the extension. LittleCo is unlikely to convince, say, Microsoft to invest engineer time on LITTLECO_NICHE_PATHS. Some device vendors may not be interested in driving OpenXR support themselves, in which case device plugin development would fall to motivated users who would have even less standing, both with runtime vendors and with Khronos itself (what would be their vendor tag?). Hobbyists may wish to implement devices for vendors that are defunct, or develop their own devices from scratch.

I appreciate that there’s value to having input identifiers with well-known semantics, and to preserving room for future additions to that set. I don’t think such a strong restriction is necessary to accomplish that. Perhaps a dedicated “experimental” or “private use” namespace for totally unstructured input subpath identifiers would solve the problem?

This is distressing to hear. Forcing devices an application does not explicitly support to be contorted to imitate a supported device severely weakens the abstraction. Aren’t interaction profiles just there to support suggested bindings? The promise of OpenXR input seemed to be the obsolescence of the entire notion of “supported devices” save for the availability of built-in default bindings. If e.g. the user has configured an explicit input → action mapping for the application in the runtime, why should interaction profiles be involved at all? I imagined OpenXR applications often exposing actions for which a suggested binding is not even provided, making them inaccessible via any application-known interaction profile. The spec does not seem to indicate that interaction profiles are mandatory, so I suspect I’m misunderstanding you.

My hope is that applications will ultimately be concerned only with the states of their actions, allowing the runtime (and in most cases, by extension, the user) complete freedom in the mapping of inputs to them. This seems to be what the architecture is leading towards, but for these restrictions.

I’d be happy to contribute, but unfortunately my interest is personal; my employer is unlikely to sponsor a membership.


My concern is that an extension represents a steep barrier for niche devices, particularly if the runtime must be modified to implement the extension.

Indeed! Allowing input device vendors to self-enable without getting buy-in from each runtime vendor is the key motivation for allowing input device vendors to implement their own input-focused OpenXR device plugins in the future. When that post-1.0 OpenXR device plugin extension is introduced, that general extension will itself define the rules for how new paths invented by a device plugin are namespaced and provided up into apps. At that point, LittleCo doesn’t need the runtime vendor to have ever heard of their device - they only need that runtime vendor to support the general OpenXR device plugin extension and then app developers are good to go to target their device.

This is distressing to hear. Forcing devices an application does not explicitly support to be contorted to imitate a supported device severely weakens the abstraction.

For OpenXR 1.0 in particular, device plugins are unfortunately out of scope. This means that in the 1.0 timeframe, any input device supported by a runtime is either known by that runtime vendor in advance or participates in that vendor’s own input extensibility extension (which will then define its own extension-specific rules regarding binding paths, interaction profiles and namespacing). When the generic device plugin extension is ultimately introduced, I expect you’ll see many devices just leveraging that general extension rather than relying on these early per-runtime extensions.

My hope is that applications will ultimately be concerned only with the states of their actions, allowing the runtime (and in most cases, by extension, the user) complete freedom in the mapping of inputs to them. This seems to be what the architecture is leading towards, but for these restrictions.

This action focus should indeed be the case for OpenXR apps today. A developer makes actions for its app and then suggests default bindings for the controllers and other input forms they’ve tested against. When a user shows up with some other controller or input form, the runtime is free to allow arbitrary rebinding of the app’s actions to that device.

The purpose of interaction profiles is to give the runtime enough context on the app’s assumptions to create a default binding to new controllers, without requiring manual user effort for every action. For example, an app may have only ever been tested on Oculus Touch controllers, and so it would declare suggested bindings for the /interaction_profiles/oculus/touch_controller interaction profile. If the user then turns out to be using a Windows Mixed Reality controller, the runtime can intelligently generate a default binding for that controller (e.g. map actions bound to the Oculus thumbstick to the physical WinMR thumbstick, map actions bound to the Oculus A/B buttons to physical WinMR touchpad segments, etc.). Meanwhile, if the app did test on Windows Mixed Reality controllers, it would have also declared suggested bindings for /interaction_profiles/microsoft/motion_controller, and so the runtime would just use those bindings directly. If the user instead has only a hand pose sensor and a 10-button belt, for example, and there is no obvious mapping from an Oculus Touch controller to those inputs, the runtime could give a popup to the user, asking them to help create a binding for the app’s unbound actions.
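The remapping strategy in that paragraph can be sketched as a table lookup. The equivalence table below is invented for illustration (a real runtime would ship curated mappings between interaction profiles, and the WinMR "segment" subpaths here are hypothetical):

```python
# Illustrative sketch of the default-binding synthesis described above.
# The TOUCH_TO_WMR table is invented; the trackpad "segment" subpaths
# are hypothetical, not actual interaction-profile paths.

TOUCH_TO_WMR = {
    "/input/thumbstick": "/input/thumbstick",         # like-for-like
    "/input/a/click": "/input/trackpad/click_lower",  # hypothetical segment
    "/input/b/click": "/input/trackpad/click_upper",  # hypothetical segment
}

def synthesize_bindings(suggested, table):
    """Remap each suggested (action -> subpath) onto the new device,
    collecting actions with no obvious mapping for user attention."""
    out, unbound = {}, []
    for action, subpath in suggested.items():
        if subpath in table:
            out[action] = table[subpath]
        else:
            unbound.append(action)  # e.g. prompt the user to bind these
    return out, unbound

suggested = {"move": "/input/thumbstick",
             "jump": "/input/a/click",
             "grab": "/input/squeeze/click"}
bindings, unbound = synthesize_bindings(suggested, TOUCH_TO_WMR)
print(bindings["jump"])  # /input/trackpad/click_lower
print(unbound)           # ['grab']
```

The `unbound` list corresponds to the "popup to the user" fallback in the paragraph above: actions the runtime could not map automatically.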

I believe the design goals of the OpenXR input system and the future device plugin extension are well-aligned with your goals here, but there could definitely be gotchas in the 0.90 design that would stop us from achieving those goals! Please do let us know if you see anything in the API or the spec text that would interfere today with the ability for an OpenXR 1.0 runtime to allow full and arbitrary user remapping of actions to input devices that were unknown to the app developer.


Thanks for the clarification! This is exactly the sort of reassurance I was looking for, and restores my confidence that the problem, and the benefit of a general solution, is well understood. Understanding that the “Runtimes must ignore [non-specified] input source paths” language is subject to being mooted by extensions, I’m not aware of any further systemic issues.

My one remaining concern, discussed briefly on Slack and reported on GitHub, is that instantaneous and relative inputs do not have an obvious representation in the system, even though a relative input identifier, trackball, is defined, and allusion is further made to treadmills. These input types lend themselves to an event queue model rather than state polling, so that state changes between polls are not lost, and exact timing and rate of change can be preserved.
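The event-queue model suggested here can be sketched as follows. All names and shapes are invented for illustration; the point is only that queued, timestamped deltas survive between polls, where a pure state-polling API would collapse them:

```python
from collections import deque

# Sketch of an event-queue model for relative inputs (e.g. a trackball):
# each delta is queued with its timestamp, so motion that occurs between
# two polls is neither lost nor collapsed. Names are invented.

class RelativeInput:
    def __init__(self):
        self._events = deque()

    def on_motion(self, timestamp_ns, dx, dy):
        # Driver side: record every delta as it happens.
        self._events.append((timestamp_ns, dx, dy))

    def drain(self):
        # App side: consume all deltas since the last poll. A pure
        # state-polling API would only expose the latest accumulated
        # state, losing timing and intermediate motion.
        events = list(self._events)
        self._events.clear()
        return events

ball = RelativeInput()
ball.on_motion(1_000, 2, 0)
ball.on_motion(2_000, 3, -1)  # two deltas arrive between polls
print(ball.drain())           # [(1000, 2, 0), (2000, 3, -1)]
print(ball.drain())           # [] (queue already drained)
```

With per-event timestamps, a consumer can also recover rate of change exactly, which a sampled state value cannot provide.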