Implementing movement in hello_xr

I’m trying to implement movement using thumbstick input in the hello_xr app provided by Khronos. I can read the thumbstick value and pass it to the rendering step to adjust the position of the view. However, this does not move the VIEW reference space; it only moves the rendered view itself, so anything that is supposed to be headset-locked no longer is.

How can I move the View reference space? Or is my approach wrong altogether?

The VIEW reference space represents the headset on the user’s head. It is attached to the user and typically shouldn’t be “moved”.

The “movement” you are looking for, I assume, is to move the virtual content in the world, so that a user who is wearing the headset and standing still perceives that they have moved through the world. Assuming you are using a LOCAL reference space to place the virtual content, you can achieve this “movement” by destroying the old LOCAL space and creating a new LOCAL reference space with an offset pose. The world is then shifted relative to the user, so the user moves through the virtual world even though they didn’t move in the physical world. Throughout this, the VIEW reference space remains unchanged and keeps representing the headset on the user’s head.
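To illustrate the recreate-the-space idea, here is a rough sketch using the standard OpenXR calls `xrCreateReferenceSpace` and `xrDestroySpace`. The function name and the way the offset is accumulated are my own illustration, not code from hello_xr, and this only runs against a live OpenXR runtime with a valid `XrSession`:

```cpp
#include <openxr/openxr.h>

// Hypothetical helper: replace the app's LOCAL space with one whose origin
// is shifted by `accumulatedOffset` (in meters, in LOCAL-space axes).
// In hello_xr this would act on the app space held by OpenXrProgram.
XrSpace RecreateLocalSpaceWithOffset(XrSession session, XrSpace oldSpace,
                                     XrVector3f accumulatedOffset) {
    XrReferenceSpaceCreateInfo createInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
    createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;
    createInfo.poseInReferenceSpace.orientation = {0.f, 0.f, 0.f, 1.f};  // identity rotation
    createInfo.poseInReferenceSpace.position = accumulatedOffset;        // shifted origin

    XrSpace newSpace = XR_NULL_HANDLE;
    if (XR_SUCCEEDED(xrCreateReferenceSpace(session, &createInfo, &newSpace))) {
        xrDestroySpace(oldSpace);  // release the old LOCAL space
        return newSpace;
    }
    return oldSpace;  // keep the old space if creation failed
}
```

Note that `poseInReferenceSpace` positions the new space's origin within the LOCAL space, so you would accumulate the total thumbstick displacement there rather than passing a per-frame delta.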

I see, I did guess that my approach was wrong.

Destroying the LOCAL reference space and recreating it sounds like a solution. However, is this the common/right way to do it? Would destroying and recreating the space every single frame introduce performance overhead?

I don’t think it should cause much overhead. However, I’d imagine you would probably keep an internal transform between the LOCAL/STAGE space and your virtual “world” — your “locomotion transform”, so to speak — and update that from input rather than touching the reference space at all.
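A minimal, self-contained sketch of that "locomotion transform" idea, with no OpenXR dependency: the app keeps an offset (translation plus yaw) between the runtime's LOCAL space and the virtual world, updates it from thumbstick input each frame, and applies it when building the view matrix. All names here are illustrative, not part of hello_xr:

```cpp
#include <cmath>

// Illustrative app-side state: where the LOCAL-space origin sits in the
// virtual world. This is what thumbstick locomotion mutates; the OpenXR
// reference spaces themselves are never recreated.
struct LocomotionState {
    float x = 0.f, z = 0.f;  // accumulated translation, meters
    float yaw = 0.f;         // accumulated rotation about the up axis, radians
};

// thumbX/thumbY in [-1, 1]; dt in seconds; speed in meters per second.
// Movement is applied along the current yaw so "forward" tracks turning.
// Uses the OpenXR convention that -Z is forward at yaw = 0.
void ApplyThumbstick(LocomotionState& s, float thumbX, float thumbY,
                     float dt, float speed) {
    s.x += (std::sin(s.yaw) * thumbY + std::cos(s.yaw) * thumbX) * speed * dt;
    s.z += (-std::cos(s.yaw) * thumbY + std::sin(s.yaw) * thumbX) * speed * dt;
}
```

Each frame you would fold this state into the view matrix (e.g. pre-multiply the pose located via `xrLocateViews` by the inverse of this transform), which gives the same perceived movement as recreating the LOCAL space, without any space churn.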