Override headtracker information

Hi,
Can we override the head-tracking information of a connected headset, so that we can use our own head tracker?
We want to use a headset inside a closed motion simulator. The head tracker of this headset (like most, if not all, headsets) fuses gyroscopes, accelerometers, and an optical sensor. But this fusion does not work while the cabin is moving: the gyroscope and accelerometer readings no longer match the optical sensor. We would therefore like to substitute our own head tracker. If the headset supports OpenXR, will it be possible to override the head tracking with our own sensor?
If so, will time warping still work correctly?
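
To make the mismatch concrete, here is a rough sketch (our own names and frame conventions, nothing headset-specific). The headset’s IMU senses motion relative to the world, so inside the cabin it also picks up the simulator’s motion, while a cabin-mounted tracker senses the head relative to the cabin. With the simulator’s motion known, the cabin’s contribution could in principle be subtracted out:

```c
/* Rough sketch, not specific to any headset: angular velocities
 * expressed in a common frame compose additively, so the simulator's
 * known motion can be subtracted from the IMU reading to recover
 * head-in-cabin motion - the quantity a cabin-fixed tracker measures. */
typedef struct { float x, y, z; } Vec3;

/* imu_gyro:   angular velocity from the headset IMU (head vs. world)
 * cabin_gyro: angular velocity of the simulator cabin (cabin vs. world)
 * Both must be expressed in the same coordinate frame. */
static Vec3 head_in_cabin_gyro(Vec3 imu_gyro, Vec3 cabin_gyro)
{
    Vec3 r = {
        imu_gyro.x - cabin_gyro.x,
        imu_gyro.y - cabin_gyro.y,
        imu_gyro.z - cabin_gyro.z,
    };
    return r;
}
```

This is exactly the kind of correction the headset’s built-in fusion cannot make, since it has no knowledge of the cabin’s motion.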

You would need to work with the runtime vendor to do this: head tracking is essentially a “hardware detail”, and there is no standard way in OpenXR to override head-tracker data from the application side.

One option would be adapting the open-source “Monado” runtime, if you’re on Linux or willing to fund or contribute a Windows port: monado.freedesktop.org
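
To give a feel for what a runtime-level integration involves, here is a purely hypothetical sketch - this is not Monado’s actual driver API (see its source tree for the real interface), just the shape of the contract any runtime needs from a tracker:

```c
#include <stdint.h>

/* Hypothetical driver-side interface, for illustration only. The core
 * contract in any runtime: given the time a frame will hit the display,
 * return the predicted head pose at that time. Time warp keeps working
 * as long as the tracker can answer this question. */
typedef struct {
    float orientation[4]; /* quaternion x, y, z, w */
    float position[3];    /* meters */
} Pose;

typedef struct CustomTracker CustomTracker;

struct CustomTracker {
    void *userdata;
    /* Called by the runtime with a future timestamp (monotonic
     * nanoseconds); a motion-simulator tracker would answer from its
     * cabin-relative sensors plus the simulator's own motion model. */
    void (*get_predicted_pose)(CustomTracker *tracker,
                               int64_t at_time_ns,
                               Pose *out_pose);
};
```

So, in principle, time warp can keep working, provided your tracker can supply predicted poses at arbitrary near-future times.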

I hate to butt in, but does this mean the app is not in control of the perspective shown to the user? (I’ve had the impression that OpenXR is too restrictive to be useful.) If tracking data is not required, or the app wants to interpret it differently, shouldn’t the workflow be for the app to interpret the data instead of the OpenXR back end?

OpenXR is designed to allow for producing high-quality immersive experiences. For that to work, especially on head-mounted systems, you need some fancy magic in the compositor (time warp, etc.) and in tracking (prediction, late-latching, etc.).
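
To make the “prediction” part concrete with real OpenXR calls, here’s a minimal frame-loop fragment (session and app_space are assumed to have been created elsewhere): the app asks where the head will be at the predicted display time and renders for that pose.

```c
#include <openxr/openxr.h>

/* Fragment of a frame loop (session and app_space created elsewhere).
 * The runtime predicts where the head will be when this frame actually
 * reaches the display, and the app renders for that predicted pose. */
XrFrameWaitInfo wait_info = { .type = XR_TYPE_FRAME_WAIT_INFO };
XrFrameState frame_state = { .type = XR_TYPE_FRAME_STATE };
xrWaitFrame(session, &wait_info, &frame_state);

XrViewLocateInfo locate_info = {
    .type = XR_TYPE_VIEW_LOCATE_INFO,
    .viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO,
    .displayTime = frame_state.predictedDisplayTime, /* prediction target */
    .space = app_space,
};
XrViewState view_state = { .type = XR_TYPE_VIEW_STATE };
XrView views[2] = { { .type = XR_TYPE_VIEW }, { .type = XR_TYPE_VIEW } };
uint32_t view_count = 0;
xrLocateViews(session, &locate_info, &view_state, 2, &view_count, views);
/* views[i].pose now holds the runtime's best estimate of each eye's
 * pose at display time; the compositor later warps the submitted image
 * to correct for any remaining prediction error. */
```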

The app is in control of the perspective shown to the user. You can, of course, transform that so that the user’s head is someplace unrealistic or fantastical - the user’s head pose is just a 6DOF transform/coordinate frame, which you can transform as you like - but at the end of the day the user’s head motions must map 1:1 into motion in the world, or you’ll cause motion sickness. (Your world could be scaled up or down, don’t get me wrong, but the user’s motion would still be “consistent” in the world you are rendering.) Head-worn displays are very picky like that - it turns out the brain is used to the world being where it expects, instantly upon any movement :smiley: (Other XR systems, like CAVE-style projection environments, have much less strict requirements for latency, etc., because they always have something at least approximately right available to look at instantly.)
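
Concretely, “transforming the head pose as you like” just means composing the tracked pose with an app-chosen world transform. A minimal sketch using OpenXR’s pose types (the math helpers are my own - OpenXR defines the types but ships no math library):

```c
#include <openxr/openxr.h>

/* Hamilton product: r = a * b (unit quaternions). */
static XrQuaternionf quat_mul(XrQuaternionf a, XrQuaternionf b)
{
    XrQuaternionf r = {
        .w = a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
        .x = a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
        .y = a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
        .z = a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
    };
    return r;
}

/* Rotate vector v by unit quaternion q:
 * t = 2 (q_xyz x v); v' = v + q.w t + q_xyz x t. */
static XrVector3f quat_rotate(XrQuaternionf q, XrVector3f v)
{
    XrVector3f t = {
        2.0f * (q.y * v.z - q.z * v.y),
        2.0f * (q.z * v.x - q.x * v.z),
        2.0f * (q.x * v.y - q.y * v.x),
    };
    XrVector3f r = {
        v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
        v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
        v.z + q.w * t.z + (q.x * t.y - q.y * t.x),
    };
    return r;
}

/* r = world * head: place the tracked head pose inside an app-chosen
 * world transform (teleport, cockpit attachment, ...). The user's real
 * motion still maps 1:1 into the rendered world. */
static XrPosef pose_mul(XrPosef world, XrPosef head)
{
    XrPosef r;
    r.orientation = quat_mul(world.orientation, head.orientation);
    XrVector3f p = quat_rotate(world.orientation, head.position);
    r.position = (XrVector3f){
        world.position.x + p.x,
        world.position.y + p.y,
        world.position.z + p.z,
    };
    return r;
}
```

Each frame you’d apply pose_mul to the view poses returned by xrLocateViews before building your view matrices; the head motion stays consistent, just expressed wherever you’ve placed the user.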

Even with all of that, there are some things you could do (e.g. ramping movement velocity, or forcing rotational movement of the view) that would still trigger motion sickness, despite rendering for the head pose and all the warp magic. And there are ways to achieve such effects that don’t produce sickness (some other kind of transition instead of unnatural rotation, or enticing the user to turn their head in some other way, or…).
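
As one concrete example of the latter, here’s a minimal sketch of a “snap turn”, a common comfort technique (names and thresholds are my own; OpenXR doesn’t mandate any of this). The world rotates in discrete steps rather than smoothly, so the user never watches a continuous rotation they didn’t perform with their own head:

```c
#include <math.h>
#include <stdbool.h>

#define SNAP_ANGLE_RAD (30.0f * 3.14159265f / 180.0f)
#define STICK_DEADZONE 0.7f

/* App-side world yaw, applied on top of the tracked head pose (see the
 * pose-composition sketch above). */
static float world_yaw = 0.0f;
static bool stick_was_centered = true;

/* Call once per frame with the thumbstick X axis in [-1, 1]. The yaw
 * changes in discrete jumps, and only when the stick newly crosses the
 * deadzone, so no frame ever shows artificial continuous rotation. */
void update_snap_turn(float stick_x)
{
    bool centered = fabsf(stick_x) < STICK_DEADZONE;
    if (stick_was_centered && !centered) {
        world_yaw += (stick_x > 0.0f) ? -SNAP_ANGLE_RAD : SNAP_ANGLE_RAD;
    }
    stick_was_centered = centered;
}
```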

There are some restrictions in OpenXR, mainly from (a) the focus on a few specific hardware use cases for a 1.0 release, and (b) the requirements of high-quality immersive rendering. It’s not great (yet) if you want tracking data for some non-VR purpose, or if you have an improvised/extended setup with special usage requirements outside the mainstream (but extensions will help, and there’s an open-source runtime that works with at least some of these out-of-mainstream setups). But if you’re trying to make an immersive experience to view on some kind of head-worn device, with hand and/or controller tracking, OpenXR should give you the tools you need to do that, while letting you ignore hardware-specific quirks and details so your experience will run long into the future on devices nobody has imagined yet. We’d love to know which gaps are most important to developers, to focus on those for extensions/improvements, so your feedback really is useful.

I’d be interested to know which restrictions you find most troublesome, to bring as feedback to the working group, and what use cases you have in mind. I’m an advocate for weird and wonderful varied setups myself, some of which aren’t yet supported in OpenXR (I was raised as more of a CAVE dweller than an HMD user…), but I’m confident that the overall structure of the standard is sound and that they’ll eventually be supported through extensions, as soon as I or someone else gets time to write them.

But it feels wrong… like the vendors are being coercive… isn’t “time warp” just moving a 2D image around, and couldn’t that be done by sending some metadata to the device with each image as it’s posted to the display? It just needs to know how that image is oriented. Almost everything around VR is being restricted as if these were game consoles, even though they are PC peripherals. My general sense of OpenXR is that it’s been designed to support the vendors’ desire to lock their hardware into tightly controlled use cases instead of exposing it as regular controller and display devices. My hope was that OpenXR would address these problems, but instead it’s collaborating with them. (I don’t mind if it has a high-level representation, but it needs a low-level one too. A low-level API would be straightforward, if a little more finicky.)
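
To be concrete about the metadata I mean: OpenXR’s own frame submission already records, per image, the pose and field of view it was rendered with. A rough fragment (session, app_space, views, sub_image, and frame_state assumed set up elsewhere):

```c
/* Each projection view tells the compositor exactly which pose and FOV
 * the image was rendered with - i.e. "how that image is oriented". */
XrCompositionLayerProjectionView proj_views[2];
for (uint32_t i = 0; i < 2; i++) {
    proj_views[i] = (XrCompositionLayerProjectionView){
        .type = XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW,
        .pose = views[i].pose,  /* pose the image was rendered from */
        .fov = views[i].fov,    /* field of view used while rendering */
        .subImage = sub_image[i],
    };
}
XrCompositionLayerProjection layer = {
    .type = XR_TYPE_COMPOSITION_LAYER_PROJECTION,
    .space = app_space,         /* frame of reference for the poses */
    .viewCount = 2,
    .views = proj_views,
};
const XrCompositionLayerBaseHeader *layers[] = {
    (const XrCompositionLayerBaseHeader *)&layer,
};
XrFrameEndInfo end_info = {
    .type = XR_TYPE_FRAME_END_INFO,
    .displayTime = frame_state.predictedDisplayTime,
    .environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE,
    .layerCount = 1,
    .layers = layers,
};
xrEndFrame(session, &end_info);
```

So the mechanism I’m describing already exists at the submission boundary; my complaint is that it isn’t exposed any lower than that.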
