Motion Canceling for 6DOF Motion Simulator

Motion canceling is already available in OpenVR under Steam.
Is there any chance somebody could create it under OpenXR?

I need this! Without OpenXR, the FPS is low under Steam. And without motion cancellation, the immersion is ruined.

Agreed. The entire motion platform community that uses VR will need OpenXR support as the industry moves toward it. OpenVR is currently the only way to use OpenVRMotionCompensation (OVRMC), which is developed by a single person.

To continue expanding VR support for motion platforms, OpenXR support is critical. SimRacingStudio has used OpenVR to work with motion platforms, and could potentially help with development if needed.

If anyone has specific guidance, or knows people knowledgeable about motion compensation or motion cancellation, please respond in this thread to begin the discussion of support.

Thanks!

I made a pretty detailed write-up here of what needs to be done to build an OpenXR API layer for this:

What’s needed now is a developer willing to learn about OVRMC and do the integration.

Thanks for the writeup. I’m not 100% sure a layer is the right way to go, or that making one big layer with all the features is the most maintainable, but it’s certainly an approach, and probably the only one that will do something whether or not the runtime vendor cooperates. I think probably the ideal case would be a multi-vendor extension to supply motion cancellation data to cooperating runtimes - a limited-purpose device plugin API basically.

Not saying you’re wrong, but the API layer can be done in a week if someone looks at it full time, and will work on all platforms.
The Khronos cross-vendor EXT route and waiting for every vendor to implement it will take months, if not years.


Having done some tracking work, I am just infinitely skeptical that “simple motion cancellation” by removing a transform will provide a good experience without integration into the vendor’s tracking system. The motion base is essentially a control input into the model that I’d think would have a more profound effect.

That said, reality appears to disagree with me because people are already doing this, as well as using apparently-mostly-stock HMDs in weird places like roller coasters that also have control inputs… I haven’t found anybody at MS to fess up whether or not they had to change the tracking algorithm before sending a HoloLens 2 to the ISS, though :wink: (Every time I use a gravity constant in tracking code I think “this isn’t very portable” :smiley: )

As I understand it, the concern is that in an API layer you can control the poses the application receives from the runtime, and you can fix up the poses that are submitted with the layers, but if the runtime internally relies on head/controller poses, an API layer can't do anything about that. The most obvious example is reprojection.

That approach seems to be working for OVRMC: How to use | OpenVR Motion Compensation (dschadu.de)

Sounds to me like this can be solved by having the API layer apply the reverse of the cancellation transform applied in xrLocateViews() to the projection layer poses in xrEndFrame(). That should be sufficient to trick the motion reprojection.

Some said it wasn’t possible, but it’s here: OpenXR-MotionCompensation

Proof: !! WORLD FIRST OpenXR Motion Compensation Alpha !! - ACC | 6DOF Motion - VR HP Reverb - YouTube

PS: Kudos to BuzzteeBear for taking on this effort and getting it working in just a month.
