How to set Oculus Touch vibration through native OpenXR when there is also a Unity plugin?

A very weird requirement.
The project I am working on requires using the Unity OpenXR plugin to handle the HMD/controller tracking and scene rendering, while using native OpenXR (C++ code that will be compiled with other C++ code into a giant DLL and loaded by Unity) to trigger the controller vibration.

(We are using an Oculus Quest 2 with Oculus Touch controllers, connecting the HMD to the PC over a USB cable, and running the application on Windows.)

However, I can hardly remove the rendering part from the C++ code, since there is an initialization check. I would be very grateful for a minimal example that only triggers the Oculus Touch vibration without rendering/tracking, or a walkthrough on how to disable the rendering part while keeping the vibration working.

It sounds to me like your C++ code creates an OpenXR session. AFAIK there can be at most one OpenXR session at a time and in your case you’d end up with one from Unity and one from your C++ code.

I am assuming that you are using Native plug-ins (unity3d.com); you did say it “will be compiled with other C++ code into a giant dll and loaded by the Unity”, so I think this is a correct assumption. That leaves two options:

  1. You could go the “polling way” by having your native plugin expose a function to Unity C# scripts saying “hey, send a haptics pulse ASAP”. This would be polled in the Update() of a MonoBehaviour, for example, which means up to one frame of latency (see the sketch after this list).

This would be the simplest approach to implement.

  2. If latency is not acceptable, then you can reuse the Unity OpenXR session and kick off the haptics yourself as explained below.
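For option 1, here is a minimal sketch of what the native side could look like. The exported names and the single-flag design are my own illustration, not part of any existing API:

```cpp
// Hypothetical native-plugin side of option 1; all names here are made up for illustration.
// C++ code that wants a pulse calls RequestHapticPulse(); a MonoBehaviour binds
// ConsumeHapticPulseRequest() via [DllImport] and polls it every Update(), firing the
// haptics with the regular Unity APIs when it returns true.
#include <atomic>

static std::atomic<bool> g_hapticPulseRequested{false};

extern "C" __declspec(dllexport) void RequestHapticPulse()
{
    // Called from anywhere in the C++ code base when a pulse should be sent.
    g_hapticPulseRequested.store(true, std::memory_order_release);
}

extern "C" __declspec(dllexport) bool ConsumeHapticPulseRequest()
{
    // Polled once per frame from C#; returns true at most once per request and clears the flag.
    return g_hapticPulseRequested.exchange(false, std::memory_order_acq_rel);
}
```

The up-to-one-frame latency comes purely from the fact that the C# side only looks at the flag once per Update().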

In order to invoke xrApplyHapticFeedback(3) (khronos.org) from your C++ code, you need 2 things from Unity: an XrSession handle and an XrAction handle for the haptics output.

You can retrieve the XrSession handle through a custom feature (OpenXR Features | OpenXR Plugin | 1.8.2 (unity3d.com)) by overriding the base method OnSessionCreate() and storing the handle for later use.

You can retrieve the XrAction handle for your haptics action through GetActionHandle(). You will still need to create an action for haptics in the Unity editor, and that action is the inputAction to pass to the function.

You can then pass the XrSession and XrAction handles to your native plugin and perform the call to xrApplyHapticFeedback() yourself from the C++ code at any time.
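To make that concrete, here is a minimal sketch of the DLL side of option 2. It assumes the DLL links against the OpenXR loader so xrApplyHapticFeedback() can be called directly (otherwise you would resolve it via xrGetInstanceProcAddr()); the exported function names are made up, and the two uint64_t parameters are the raw handle values obtained on the C# side as described above:

```cpp
// Hypothetical DLL side of option 2; the exported names are made up for illustration.
// Assumes the DLL is linked against the OpenXR loader (openxr_loader) so core
// functions can be called directly.
#include <cstdint>
#include <openxr/openxr.h>

static XrSession g_session      = XR_NULL_HANDLE;
static XrAction  g_hapticAction = XR_NULL_HANDLE;

// Called once from C# after the session exists, passing the raw ulong handles
// obtained from OnSessionCreate() and GetActionHandle().
extern "C" __declspec(dllexport) void SetOpenXrHandles(uint64_t session, uint64_t hapticAction)
{
    g_session      = (XrSession)session;
    g_hapticAction = (XrAction)hapticAction;
}

// Can be called from anywhere in the C++ code, at any time after initialization.
extern "C" __declspec(dllexport) bool TriggerHapticPulse(float amplitude, float seconds)
{
    if (g_session == XR_NULL_HANDLE || g_hapticAction == XR_NULL_HANDLE)
        return false;

    XrHapticVibration vibration{XR_TYPE_HAPTIC_VIBRATION};
    vibration.amplitude = amplitude;                      // 0.0 .. 1.0
    vibration.frequency = XR_FREQUENCY_UNSPECIFIED;       // let the runtime pick
    vibration.duration  = (XrDuration)(seconds * 1.0e9);  // XrDuration is in nanoseconds

    XrHapticActionInfo actionInfo{XR_TYPE_HAPTIC_ACTION_INFO};
    actionInfo.action        = g_hapticAction;
    actionInfo.subactionPath = XR_NULL_PATH; // or a /user/hand/... path to target one controller

    return XR_SUCCEEDED(xrApplyHapticFeedback(
        g_session, &actionInfo, (const XrHapticBaseHeader*)&vibration));
}
```

Note that the action itself still has to be created on the Unity side, since action sets can only be attached to the session once; the DLL only reuses the handles it is given.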

Yep. That is the problem, or at least I think that is where the problem is.
So I am thinking about using native OpenXR code to do all the tracking and hardware handling, while only using Unity for rendering (but it seems that I would still need to figure out a way to display on the HMD?).

Any suggestions?

Yes, I will use a native plugin.
Something I am still confused about:

Solution 1: it sounds like I am making an OpenXR Unity plugin…?

Solution 2: so, let’s say a haptic pulse is intended to be triggered from the C++ code; it will call an event in Unity, Unity will call the OpenXR Unity plugin and pass the request to the OpenXR session, and then it goes to the hardware.

Sounds like solution 2 would introduce even more latency?
Or is the latency better since there are no frame updates involved, so the function is simply called immediately?

PS:
I found this GitHub - maluoi/OpenXRSamples: Concise and documented examples of using OpenXR to build native Mixed Reality applications!
But the example is still too bulky (and it is C-style…).

Is there a more simplified example that only deals with the following things:

  1. Initialize the hardware on Windows.
  2. Trigger the vibration on the controllers.

Not sure what makes you say that? You’ll already have your “giant dll loaded by Unity”; all you’ll be doing is adding one extra DllImport for a function implemented in the DLL, then using regular Unity C# constructs to kick off the haptics, like your app would do even if no DLL was involved.

That’s not what I proposed. You would be doing the work on the Unity side (C#) to retrieve the XrSession and XrAction once at initialization, then hand these handles over to your DLL. Your DLL can then invoke xrApplyHapticFeedback() at any time to kick off the haptics, without any latency at all. Unity isn’t involved at all past initialization.

You should give up that route for two reasons:

  • Most runtimes do not support “headless sessions” where you don’t submit graphics.
  • Most runtimes will not support two concurrent graphics sessions.

So going down the road of creating a second XrSession outside of Unity is doomed.

If you follow solution 1 above, then you won’t need to touch any OpenXR APIs at all; with solution 2, you will only need xrApplyHapticFeedback(), which is pretty straightforward to call.

Thank you!

Well, after some attempts I have to admit I am not good at this.
I read through the OpenXR Features documentation you listed and tried to make use of it in the VR template Unity project.

However, I have never used something like this before, so I am confused. Are there some kind of examples (a full project would be nice), or something else I’d better read, to get a better idea of what I am supposed to do?
