Buffered haptics in future spec

Will an upcoming OpenXR spec provide an interface for playing a buffer of haptic feedback?

In the provisional spec, a basic XrHapticVibration struct is available for requesting that the runtime play haptics. However, it only gives room to specify the duration, frequency, and amplitude of a single vibration. If instead an application wants to play a clip of haptic feedback, I’d expect it would need to do something hacky like xrApplyHapticFeedback(...); nanosleep(...); xrApplyHapticFeedback(...); nanosleep(...); etc., breaking the clip into samples and blocking the thread in between them. And since playing the clip could easily take longer than the time to the next frame, this would likely have to be done on a separate thread. With a buffered interface, the application could just fire off xrApplyHapticFeedback(...) once with an XrHapticVibrationBuffer and forget about it.
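
To make the workaround concrete, here is a minimal sketch of that per-sample approach, assuming the OpenXR 1.0-style xrApplyHapticFeedback signature and a hypothetical clip of amplitude samples (error handling omitted):

```c
#include <stddef.h>
#include <time.h>             // nanosleep (POSIX)
#include <openxr/openxr.h>

// Hypothetical clip: amplitude samples in [0, 1] at a fixed sample rate.
static const float clipSamples[] = {0.0f, 0.4f, 0.9f, 0.6f, 0.2f, 0.0f};
static const int64_t clipSampleRateHz = 320;

// Assumes `session`, `hapticAction`, and `handPath` were set up elsewhere.
void PlayClipTheHackyWay(XrSession session, XrAction hapticAction, XrPath handPath) {
    const XrDuration samplePeriodNs = 1000000000LL / clipSampleRateHz;
    const struct timespec sleepTime = {0, (long)samplePeriodNs};

    for (size_t i = 0; i < sizeof(clipSamples) / sizeof(clipSamples[0]); ++i) {
        XrHapticVibration vibration = {XR_TYPE_HAPTIC_VIBRATION};
        vibration.duration = samplePeriodNs;
        vibration.frequency = XR_FREQUENCY_UNSPECIFIED;  // let the runtime choose
        vibration.amplitude = clipSamples[i];

        XrHapticActionInfo actionInfo = {XR_TYPE_HAPTIC_ACTION_INFO};
        actionInfo.action = hapticAction;
        actionInfo.subactionPath = handPath;

        xrApplyHapticFeedback(session, &actionInfo, (const XrHapticBaseHeader*)&vibration);

        // Block until the next sample is due - hence the need for a separate thread.
        nanosleep(&sleepTime, NULL);
    }
}
```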

The Oculus SDK already provides buffered haptic playback for its Touch controllers, and Unity has provided a HapticCapabilities.supportsBuffer flag since its 2018.3 release. An OpenXR interface could be particularly helpful for providing a common interface for current and future devices that permit audio-rate control, like haptic vests à la Subpac and Tactsuit. If I understand correctly, this could be an extension or work with the future device plugin interface. I thought it could benefit from a standard interface, though, since from an application’s point of view, a buffered haptic device just looks like a speaker/DAC it submits buffers to.

I can’t speak for Oculus here, though I would generally expect runtime vendors to provide extensions over time for runtime features they support that fall outside the scope of the OpenXR 1.0 spec. Speaking for Microsoft, we will certainly be supporting extensions for all the HoloLens 2 features that are outside the OpenXR 1.0 scope (e.g. hand tracking, eye tracking, spatial anchors, spatial mapping, and world-scale reference spaces), with the goal of having as many of these as possible be cross-vendor EXT_ extensions.

One catch with quickly standardizing buffered haptics is that the details of the buffers will often be particular to a given haptic motor. For example, the Oculus SDK’s buffered haptics accept 320Hz samples. This is not an accident - the Oculus Touch controller’s haptic device resonates at 320Hz and 160Hz and the Oculus guidance around constructing haptic buffers takes specific advantage of this non-linear frequency response.
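
For reference, submitting such a buffer through the Oculus PC SDK (LibOVR) looks roughly like the sketch below, where the samples are 8-bit amplitudes consumed at 320Hz; treat details like maximum queue depth as device-specific and check the SDK documentation:

```c
#include <OVR_CAPI.h>

// Assumes `session` is a valid ovrSession and `samples` holds 0-255 amplitude
// values authored at the Touch controller's 320Hz haptic sample rate.
ovrResult SubmitTouchHapticClip(ovrSession session,
                                const unsigned char* samples,
                                int sampleCount) {
    ovrHapticsBuffer buffer;
    buffer.Samples = samples;
    buffer.SamplesCount = sampleCount;
    buffer.SubmitMode = ovrHapticsBufferSubmit_Enqueue;  // append to the playback queue

    return ovr_SubmitControllerVibration(session, ovrControllerType_RTouch, &buffer);
}
```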

As OpenXR apps will be targeting a range of haptic output devices from many vendors, we’ll need to be thoughtful about how we design general buffered haptic support, ensuring that runtimes are in an informed position to adapt the app’s intended haptic output to the frequency response of the target device. I am confident this is a solvable problem - we just didn’t want to block the OpenXR 1.0 release on getting to a cross-vendor solution here.
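
As a purely illustrative sketch of one piece of that adaptation, here is a naive linear resampler that converts a clip authored at one amplitude sample rate to a device's native rate; a real runtime would also need to account for the motor's frequency response, not just its sample rate:

```c
#include <stddef.h>

// Naive linear resampler: converts amplitude samples (0.0 - 1.0) authored at
// srcRateHz into dstRateHz samples for the target device. Returns the number
// of output samples written (at most dstCapacity).
size_t ResampleHapticClip(const float* src, size_t srcCount, float srcRateHz,
                          float* dst, size_t dstCapacity, float dstRateHz) {
    if (srcCount == 0 || dstCapacity == 0) return 0;

    size_t dstCount = (size_t)((srcCount - 1) * (dstRateHz / srcRateHz)) + 1;
    if (dstCount > dstCapacity) dstCount = dstCapacity;

    for (size_t i = 0; i < dstCount; ++i) {
        float srcPos = i * (srcRateHz / dstRateHz);
        size_t i0 = (size_t)srcPos;
        if (i0 >= srcCount) i0 = srcCount - 1;          // guard against rounding
        size_t i1 = (i0 + 1 < srcCount) ? i0 + 1 : i0;  // clamp at the last sample
        float t = srcPos - (float)i0;
        dst[i] = src[i0] * (1.0f - t) + src[i1] * t;    // linear interpolation
    }
    return dstCount;
}
```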

That’s a fair approach. Definitely agree that thoughtfulness is key.

Is there a particular reason why the provisional spec doesn’t address audio at all? If it did, I’d imagine that would provide a nice starting point for certain haptics extensions.

Reprojecting graphics frames optimally tends to be very headset-specific, taking into account the specifics of its optics and display panel, as well as the details of a given runtime’s XR compositor. This is why OpenXR has the runtime manage the session’s frame timing and frame submission on the app’s behalf. This does involve some complexity, as it means the OpenXR spec needs individual graphics extensions to expose runtime compositor support for each graphics API such as Vulkan, Direct3D, etc. However, by abstracting runtime management of frame presentation in this way, an OpenXR app you build can do low-latency reprojected rendering today, and will then be forward-compatible with a wide range of future headsets even if you don’t update your app.
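
For context, here is a minimal sketch of the frame loop that this runtime-managed model gives an app, using the core OpenXR 1.0 calls (swapchain rendering, shouldRender checks, and error handling omitted):

```c
#include <openxr/openxr.h>

// Assumes `session` and a populated projection layer were set up elsewhere.
void RenderOneFrame(XrSession session, const XrCompositionLayerProjection* projectionLayer) {
    // The runtime decides when the app should wake and which display time to predict for.
    XrFrameWaitInfo waitInfo = {XR_TYPE_FRAME_WAIT_INFO};
    XrFrameState frameState = {XR_TYPE_FRAME_STATE};
    xrWaitFrame(session, &waitInfo, &frameState);

    XrFrameBeginInfo beginInfo = {XR_TYPE_FRAME_BEGIN_INFO};
    xrBeginFrame(session, &beginInfo);

    // ... locate views for frameState.predictedDisplayTime and render into swapchains ...

    const XrCompositionLayerBaseHeader* layers[] = {
        (const XrCompositionLayerBaseHeader*)projectionLayer};

    XrFrameEndInfo endInfo = {XR_TYPE_FRAME_END_INFO};
    endInfo.displayTime = frameState.predictedDisplayTime;
    endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    endInfo.layerCount = 1;
    endInfo.layers = layers;
    xrEndFrame(session, &endInfo);  // the runtime's compositor handles reprojection/present
}
```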

In contrast, HRTF algorithms suitable for VR headset users wearing headphones tend to be far less headset-specific than the distortion pass for reprojected graphics frames. At the same time, developers building 3D apps tend to have familiar audio tooling they prefer to use, such as Wwise, FMOD, etc. Providing a standardized OpenXR 3D audio API that was either full-featured enough to replace such audio tooling or could integrate deeply enough with each popular audio library to meet developer expectations would have been another huge chunk of API to standardize, and almost certainly would have delayed the spec ratification.

With the scope of OpenXR 1.0, we believe you’re unblocked to do great HRTF audio for desktop VR using your favorite HRTF-enabled audio engine, moving the listeners and emitters in your audio scene each frame to match the tracking data you get from OpenXR. If you see any gaps there, please let us know!
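
A sketch of that per-frame hookup might look like the following, where AudioEngine_SetListenerPose stands in for a hypothetical call into whatever HRTF-enabled engine you use (the OpenXR calls are real; the audio-engine function is an assumption):

```c
#include <openxr/openxr.h>

// Hypothetical audio-engine hook: listener position (meters) and orientation (quaternion).
extern void AudioEngine_SetListenerPose(XrVector3f position, XrQuaternionf orientation);

// Assumes `viewSpace` was created as XR_REFERENCE_SPACE_TYPE_VIEW and `worldSpace`
// as XR_REFERENCE_SPACE_TYPE_LOCAL or XR_REFERENCE_SPACE_TYPE_STAGE (matching the audio scene).
void UpdateAudioListener(XrSpace viewSpace, XrSpace worldSpace, XrTime predictedDisplayTime) {
    XrSpaceLocation location = {XR_TYPE_SPACE_LOCATION};
    if (XR_SUCCEEDED(xrLocateSpace(viewSpace, worldSpace, predictedDisplayTime, &location)) &&
        (location.locationFlags & XR_SPACE_LOCATION_POSITION_VALID_BIT) &&
        (location.locationFlags & XR_SPACE_LOCATION_ORIENTATION_VALID_BIT)) {
        // Drive the audio engine's listener from this frame's headset pose.
        AudioEngine_SetListenerPose(location.pose.position, location.pose.orientation);
    }
}
```

Emitters attached to tracked controllers could be updated the same way, locating their action spaces against the same world space each frame.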

Rather than providing a full 3D audio toolkit, it might be useful for OpenXR to offer a (much simpler) mechanism to identify the appropriate audio device for a given system in platform-specific terms, or perhaps to stream already fully processed audio out to that device directly. Suitable queries could distinguish headphones or earbuds fixed to a tracked device from, say, a static surround sound system.
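
Purely as a strawman (nothing like this exists in the spec today), such a query might look like a small extension-style function that returns a platform-specific device identifier for the app to hand to its audio API of choice:

```c
#include <openxr/openxr.h>

// Hypothetical extension-style query - NOT part of any OpenXR spec.
// On Windows this might return a WASAPI endpoint ID; on Android, an audio device ID.
// Follows the usual OpenXR two-call idiom for returning a string buffer.
typedef XrResult (XRAPI_PTR *PFN_xrGetAudioOutputDeviceIdEXAMPLE)(
    XrSession session,
    uint32_t  bufferCapacityInput,
    uint32_t* bufferCountOutput,
    char*     buffer);
```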
