Get controller positions

Hi!

I am developing a VR app based on the HelloXR sample in C++.
I have to log the position of my headset and my controllers, but I don’t know how to proceed. Can you help me, please?

Thanks a lot for your answers!


Headset position: you’ll already need xrLocateViews for rendering, so you can log those poses too.
Controller positions: you’ll need to set up pose actions for the controllers, create an action space for each, then use xrLocateSpace.


To get any pose you need two spaces (plus, for the eyes, the views):

  1. The space in which you want to read poses (stage/room-scale or local).
  2. The space of the head or controller you want to read.
  3. The views for the eyes.

Ref 1. This is your main space, relative to which you will read all poses. Create it with xrCreateReferenceSpace using type XR_REFERENCE_SPACE_TYPE_STAGE or XR_REFERENCE_SPACE_TYPE_LOCAL.
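A minimal sketch of that call (assuming #include <openxr/openxr.h> as in hello_xr, a valid XrSession named session, and no error handling):

```cpp
// Sketch: create the base reference space everything else will be read in.
XrReferenceSpaceCreateInfo spaceInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
spaceInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_STAGE;  // or XR_REFERENCE_SPACE_TYPE_LOCAL
spaceInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, 0.f, 0.f}};  // identity pose
XrSpace baseSpace = XR_NULL_HANDLE;
xrCreateReferenceSpace(session, &spaceInfo, &baseSpace);
```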

Ref 2. Head and controllers require spaces to be created in a different way.

Head: Create it with xrCreateReferenceSpace with reference type XR_REFERENCE_SPACE_TYPE_VIEW. This one is pretty straightforward.
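A sketch, reusing the same assumed session handle as above:

```cpp
// Head (view) space: the same call, just a different reference space type.
XrReferenceSpaceCreateInfo viewSpaceInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
viewSpaceInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_VIEW;
viewSpaceInfo.poseInReferenceSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, 0.f, 0.f}};
XrSpace headSpace = XR_NULL_HANDLE;
xrCreateReferenceSpace(session, &viewSpaceInfo, &headSpace);
```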

Controllers: You first need to do the following (a rough sketch of the whole sequence is shown after this list):

  • create an action set (xrCreateActionSet; name it any way you want, e.g. “gameplay”),
  • create actions (xrCreateAction; these define what kind of input it is, for hand poses it is XR_ACTION_TYPE_POSE_INPUT, and you name them too),
  • create paths (xrStringToPath; to get pose locations you need separate paths for both hands and for aim/grip, more on that later),
  • bind them together and provide those bindings as suggestions (xrSuggestInteractionProfileBindings; this is done per controller type, and it is where you tie an action and a path together),
  • attach the created action set to the session (xrAttachSessionActionSets),
  • create the action spaces (xrCreateActionSpace, again using an action and a path, similarly to the binding suggestion).
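A sketch of that sequence, not a drop-in implementation: instance and session are assumed to be your valid XrInstance/XrSession, the names (“gameplay”, “hand_pose”, the grip paths, the khr/simple_controller profile) are just examples, and error handling is omitted.

```cpp
#include <cstring>
#include <openxr/openxr.h>

// Action set ("gameplay" is an arbitrary example name).
XrActionSet actionSet = XR_NULL_HANDLE;
XrActionSetCreateInfo setInfo{XR_TYPE_ACTION_SET_CREATE_INFO};
std::strcpy(setInfo.actionSetName, "gameplay");
std::strcpy(setInfo.localizedActionSetName, "Gameplay");
xrCreateActionSet(instance, &setInfo, &actionSet);

// Sub-action paths so one action can later be queried per hand.
XrPath handPaths[2];
xrStringToPath(instance, "/user/hand/left",  &handPaths[0]);
xrStringToPath(instance, "/user/hand/right", &handPaths[1]);

// One pose action covering both hands.
XrAction poseAction = XR_NULL_HANDLE;
XrActionCreateInfo actionInfo{XR_TYPE_ACTION_CREATE_INFO};
actionInfo.actionType = XR_ACTION_TYPE_POSE_INPUT;
std::strcpy(actionInfo.actionName, "hand_pose");
std::strcpy(actionInfo.localizedActionName, "Hand pose");
actionInfo.countSubactionPaths = 2;
actionInfo.subactionPaths = handPaths;
xrCreateAction(actionSet, &actionInfo, &poseAction);

// Suggest bindings for one interaction profile (here the Khronos simple controller,
// using the grip pose; repeat per controller type you want to support).
XrPath gripPaths[2];
xrStringToPath(instance, "/user/hand/left/input/grip/pose",  &gripPaths[0]);
xrStringToPath(instance, "/user/hand/right/input/grip/pose", &gripPaths[1]);
XrActionSuggestedBinding bindings[2] = {{poseAction, gripPaths[0]}, {poseAction, gripPaths[1]}};
XrInteractionProfileSuggestedBinding suggested{XR_TYPE_INTERACTION_PROFILE_SUGGESTED_BINDING};
xrStringToPath(instance, "/interaction_profiles/khr/simple_controller", &suggested.interactionProfile);
suggested.countSuggestedBindings = 2;
suggested.suggestedBindings = bindings;
xrSuggestInteractionProfileBindings(instance, &suggested);

// Attach the action set to the session (allowed only once per session).
XrSessionActionSetsAttachInfo attachInfo{XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO};
attachInfo.countActionSets = 1;
attachInfo.actionSets = &actionSet;
xrAttachSessionActionSets(session, &attachInfo);

// One action space per hand, later queried with xrLocateSpace.
XrSpace handSpaces[2];
for (int i = 0; i < 2; ++i) {
    XrActionSpaceCreateInfo spaceInfo{XR_TYPE_ACTION_SPACE_CREATE_INFO};
    spaceInfo.action = poseAction;
    spaceInfo.subactionPath = handPaths[i];
    spaceInfo.poseInActionSpace = {{0.f, 0.f, 0.f, 1.f}, {0.f, 0.f, 0.f}};
    xrCreateActionSpace(session, &spaceInfo, &handSpaces[i]);
}
```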

Ref 3. You may first want to call xrEnumerateViewConfigurationViews to get the number of views. You don’t have to configure/create views in any way as all info about them is provided when you read them (see below).
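For example (a sketch assuming instance and systemId were obtained earlier via xrCreateInstance/xrGetSystem, and a stereo view configuration):

```cpp
#include <vector>

uint32_t viewCount = 0;
xrEnumerateViewConfigurationViews(instance, systemId,
                                  XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO,
                                  0, &viewCount, nullptr);
std::vector<XrViewConfigurationView> configViews(viewCount, {XR_TYPE_VIEW_CONFIGURATION_VIEW});
xrEnumerateViewConfigurationViews(instance, systemId,
                                  XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO,
                                  viewCount, &viewCount, configViews.data());
// configViews[i] now holds the recommended image sizes etc. for each view.
```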

Now, when you have all of that done, you can read locations.
First you need to sync actions (xrSyncActions).
Then read the head and controllers using xrLocateSpace with the XrSpaces created earlier.
The eyes are read using xrLocateViews; you just provide how many views you want and a pointer to where the data should be stored.
Both xrLocateSpace and xrLocateViews require the time at which you want to read the values.
Note that if for any reason you’d want to read the default eye offset relative to the head, some OpenXR implementations allow reading at time = 0 and some throw an error.
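Putting the per-frame part together, here is a sketch that assumes the handles from the earlier snippets (actionSet, handSpaces, headSpace, baseSpace, configViews) and predictedDisplayTime taken from the XrFrameState returned by xrWaitFrame:

```cpp
// Sync actions once per frame before reading action spaces.
XrActiveActionSet activeSet{actionSet, XR_NULL_PATH};
XrActionsSyncInfo syncInfo{XR_TYPE_ACTIONS_SYNC_INFO};
syncInfo.countActiveActionSets = 1;
syncInfo.activeActionSets = &activeSet;
xrSyncActions(session, &syncInfo);

// Controllers and head relative to the base (stage/local) space.
for (XrSpace space : {handSpaces[0], handSpaces[1], headSpace}) {
    XrSpaceLocation location{XR_TYPE_SPACE_LOCATION};
    xrLocateSpace(space, baseSpace, predictedDisplayTime, &location);
    if (location.locationFlags & XR_SPACE_LOCATION_POSITION_VALID_BIT) {
        // location.pose.position / location.pose.orientation can be logged here.
    }
}

// Eyes: one XrView per view, filled in by the runtime.
XrViewState viewState{XR_TYPE_VIEW_STATE};
XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
locateInfo.displayTime = predictedDisplayTime;
locateInfo.space = baseSpace;
std::vector<XrView> views(configViews.size(), {XR_TYPE_VIEW});
uint32_t viewCountOut = 0;
xrLocateViews(session, &locateInfo, &viewState,
              (uint32_t)views.size(), &viewCountOut, views.data());
```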

I haven’t found a way to read eye offset before fetching views via xrLocateViews.


Nice summary.

Interesting. The OpenXR™ Specification says
“Unless specified otherwise, zero or a negative value is not a valid XrTime, and related functions must return error XR_ERROR_TIME_INVALID.”

Eye offsets, IPD, etc. are not really a thing in the OpenXR API. They were somewhat sensible in the past, when HMDs strictly had two parallel displays and rendering was essentially done once for head pose - (IPD / 2) and once for head pose + (IPD / 2), but OpenXR is trying to be more generalized. Displays are not always parallel anymore, a physical IPD slider might adjust the view poses, FOVs may vary between the left and right eye, and we may get even more esoteric display arrangements in the future. You can always calculate the differences between view space and the views from xrLocateViews, but be sure not to make undue assumptions about the display arrangement.
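For instance, one way to compute those differences without assuming a symmetric IPD is to locate the views directly in a VIEW reference space, so each returned pose is already expressed relative to the head. A sketch, reusing the assumed session, headSpace and predictedDisplayTime names from the snippets earlier in the thread:

```cpp
XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
locateInfo.displayTime = predictedDisplayTime;  // a valid XrTime, not 0
locateInfo.space = headSpace;                   // VIEW reference space
XrViewState viewState{XR_TYPE_VIEW_STATE};
XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};
uint32_t count = 0;
xrLocateViews(session, &locateInfo, &viewState, 2, &count, views);
// views[i].pose is the i-th view's offset from the head, views[i].fov its FOV;
// nothing here assumes the two displays are parallel or symmetric.
```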


This is true. Yet OpenXR for Oculus PC allows 0 time and returns XR_SUCCESS. It’s a trap :smiley:

I should have been more explicit, my bad. By offsets I meant not just the position offset (i.e. a vector) but the whole transform + FOV. This shouldn’t change frame to frame, at least not by much. The reason is that in some cases you may want that pose offset + FOV before actual rendering, when you prepare the render scene while the previous frame is still being rendered. But as I expect no view is going to change every frame, it should be fine.

Having said that, I should support more views/eyes and change the rendering setup a bit to accommodate these differences.


If you ever want to run on something eye-tracked, the fov and position may very well change frame to frame. Not to mention if we get around to doing something about fishtank/CAVE eventually…


In the spec this is only mentioned in passing, but you can call xrWaitFrame twice from different threads and perform rendering work on both frames in parallel; that’s called “pipelined” rendering / frame submission.


That’s if eye-tracking is handled solely by the headset. I can also imagine some exotic curved display that requires multiple views to be rendered, with the headset then combining all these views into what is actually displayed.

Besides that, the views may already change a lot between preparation and actual rendering (the FOV remains), but with FOVs as large as today’s single view per eye it’s rare that something culled during preparation gets noticed as missing when rendering. With many more, smaller views it may be more apparent, or scenes will always have to be prepared for a much larger FOV with some further selection done while rendering.


I indeed must have missed that. Thanks! :slight_smile:


Thanks for all your answers. I actually found my solution using pose actions and action spaces before receiving your answers.
Now I have my controllers represented by cubes in my VR scene, and when I press ‘A’, a line is drawn from one cube so I can see where I can teleport myself.
However, I can’t manage to get the coordinates of the second vertex of my line to do the teleportation. Any idea?

Thanks a lot for your answers!


Excuse me, why is there no ray on my handle?


The most trivial solution would be to calculate the intersection between the line/segment and a plane at your feet level. But this will allow teleporting through walls, etc.

You may need something more elaborate, like casting a ray against the world collision geometry and finding a spot using a navigation mesh.
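A sketch of the trivial approach, assuming a Stage space (floor at y = 0) and aimPose being the controller pose you already locate in that space; the Rotate helper and TeleportTarget name are hypothetical, not part of any API:

```cpp
#include <optional>
#include <openxr/openxr.h>

// Rotate a vector by a quaternion: v' = v + 2 * u x (u x v + w * v), with u = (q.x, q.y, q.z).
static XrVector3f Rotate(const XrQuaternionf& q, const XrVector3f& v) {
    XrVector3f u{q.x, q.y, q.z};
    XrVector3f c{u.y * v.z - u.z * v.y + q.w * v.x,
                 u.z * v.x - u.x * v.z + q.w * v.y,
                 u.x * v.y - u.y * v.x + q.w * v.z};
    return {v.x + 2.f * (u.y * c.z - u.z * c.y),
            v.y + 2.f * (u.z * c.x - u.x * c.z),
            v.z + 2.f * (u.x * c.y - u.y * c.x)};
}

// Intersect the controller's forward ray (-Z in OpenXR) with the floor plane y = 0.
std::optional<XrVector3f> TeleportTarget(const XrPosef& aimPose) {
    XrVector3f dir = Rotate(aimPose.orientation, {0.f, 0.f, -1.f});
    if (dir.y >= -1e-4f) return std::nullopt;   // pointing up or (nearly) parallel to the floor
    float t = -aimPose.position.y / dir.y;      // distance along the ray to y = 0
    return XrVector3f{aimPose.position.x + t * dir.x, 0.f,
                      aimPose.position.z + t * dir.z};
}
```

The returned point would be the second vertex of the teleport line (and the teleport destination), with no wall or navmesh checks.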

Do you mean that a ray is not rendered? The ray is not a part of the controller mesh for two simple reasons: 1. you may want to render just the controller, 2. you don’t know how long the ray should be.

I added rays in the application layer. You can have rays. Can’t you add rays in the runtime? Or is the ray developed at the application layer?

There should be no problem with adding them at runtime. How to do it really depends on the engine (it can be a mesh, scaled or with its length adjusted via a vertex shader, or it can be some custom draw function).

I would prefer to have them in the same layer as controllers.

For the Unreal Engine I use, I send the 6DOF of the handle to the runtime, and the runtime sends it to Unreal through xrLocateSpace. The handle can move, but there is no ray. The handle is the default handle type in Unreal, which is inconsistent with my actual handle’s appearance. If I want to do the ray in the runtime instead of in the application layer, do I need to provide an interaction_profile according to what Unreal says? How do I provide this document (interaction_profile)?

I see… how can I get the feet level? I think getting the headset position to get its height is the right way. You told me that I have to use xrLocateViews to get the headset position. However, the function writes into an XrViewState object, which doesn’t contain pose information.
What can I do?

Thanks a lot for your answers!

If you use the Stage reference space, feet level will be at 0 (y = 0 if we’re talking the OpenXR coordinate system); note that the Local space origin is typically near the user’s initial position, not necessarily at floor level.

xrLocateViews takes 6 parameters. XrViewState only holds information about what it was possible to read (position/orientation validity etc.). The actual transforms are stored in what you provide as the last parameter, which should be an array/vector of XrView.
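Continuing the per-frame sketch from earlier in the thread (viewState, views and viewCountOut as assumed there), the poses end up in that XrView array:

```cpp
if (viewState.viewStateFlags & XR_VIEW_STATE_POSITION_VALID_BIT) {
    for (uint32_t i = 0; i < viewCountOut; ++i) {
        const XrPosef& eyePose = views[i].pose;  // position + orientation in the base space
        const XrFovf&  eyeFov  = views[i].fov;   // per-view field of view
        // In stage space, eyePose.position.y is the eye height above the floor.
    }
}
```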

I’m sorry, I have no idea how Unreal handles that. But from what you wrote, it seems that it is some generic handle, and maybe it is possible to change it by providing the actual interaction_profile. It should also be possible to replace it with your own custom mesh. The ray is most likely something completely separate, and you may have to implement it on your own.

Hmm, so I know that this was discussed at one point during spec development, but I was pretty sure it’s no longer permitted by the spec. It looks like the CTS tests this for xrLocateSpace (but maybe not xrLocateViews?): OpenXR-CTS/src/conformance/conformance_test/test_xrLocateSpace.cpp at devel · KhronosGroup/OpenXR-CTS · GitHub There it’s looking at two spaces both created from view space, so if anything allowed passing 0 it would be this, and it definitely does not. So whatever runtime lets you pass 0 in to xrLocateViews most likely has a bug.