How to recenter the world based on the user’s current pose?

I’d like to be able to center the user’s view based on their current pose. My current solution is to interpolate between the left and right eye view orientations by 0.5 to approximate what the “center eye” orientation would be, and then I rotate the world by the resulting quaternion. This seems to produce something close to correct, but I don’t think it’s quite right. How should this be done?
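For reference, a stripped-down sketch of what I’m doing (GLM types; the names are just illustrative):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// leftEye / rightEye are the two XrView orientations returned by
// xrLocateViews, converted to glm::quat.
glm::quat CenterEyeOrientation(const glm::quat& leftEye, const glm::quat& rightEye)
{
    // Interpolate halfway between the eye orientations to approximate
    // the "center eye".
    return glm::slerp(leftEye, rightEye, 0.5f);
}

// The world is then rotated by the resulting quaternion to recenter it.
```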

Hi @Cecil
What’s the discrepancy?

View orientation … would that be built from the forward vector of a ‘user-object’ pose? If so, you might think of this orientation as a camera rotation. The difference between eye-space and camera-space is, conceptually, that you think of the camera with a cone in front … the eye-cone is the one that emanates from the ‘camera lens’ toward the eye sensors behind the lens. So, if you build the rotation from the pose orientation, the unit vectors should be negated. The translation should be negated too … and so should their order of concatenation, compared to the order you typically use for building a camera matrix [rotate first, translate second].
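In GLM terms, I mean something like this (a sketch of my understanding; the function names are mine):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// Camera/pose transform: rotation applied first, translation second.
glm::mat4 CameraMatrix(const glm::quat& orientation, const glm::vec3& position)
{
    return glm::translate(glm::mat4(1.0f), position) * glm::mat4_cast(orientation);
}

// View matrix: the undo of the above. Negated translation applied first,
// conjugated (inverted) rotation second, i.e. the reversed concatenation.
glm::mat4 ViewMatrix(const glm::quat& orientation, const glm::vec3& position)
{
    return glm::mat4_cast(glm::conjugate(orientation)) *
           glm::translate(glm::mat4(1.0f), -position);
}
```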
… so much for my current understanding …
another current thread on an equivalent type of problem

I call xrLocateViews and use the view poses from that. Are you sure it would be negated? I was thinking that if the world is transformed the same as the view orientation, you effectively cancel it out and “center” it.
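For context, this is roughly how I obtain the view poses (error handling omitted; the session, space, and display time come from the rest of my frame loop):

```cpp
#include <openxr/openxr.h>
#include <vector>

// Locate both eye views for the current frame. The stereo view
// configuration is an assumption; use whatever your session was created with.
std::vector<XrView> LocateEyeViews(XrSession session, XrSpace appSpace,
                                   XrTime displayTime)
{
    XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
    locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    locateInfo.displayTime = displayTime;
    locateInfo.space = appSpace;

    XrViewState viewState{XR_TYPE_VIEW_STATE};
    uint32_t viewCount = 0;
    std::vector<XrView> views(2, XrView{XR_TYPE_VIEW});
    xrLocateViews(session, &locateInfo, &viewState,
                  static_cast<uint32_t>(views.size()), &viewCount, views.data());
    return views;  // views[0].pose and views[1].pose are the eye poses
}
```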

I didn’t consider the possible differences between OpenGL & OpenXR … I’m an alien concerning OpenXR.
I’ve tested the construction of a view matrix from a camera/viewer transform and compared it with glm::lookAt (which takes the eye/camera position, a look-at point, and an up vector as params). My description produces a matrix that matches it.
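The test looked roughly like this (a sketch; glm::quatLookAt builds the camera orientation, and the tolerance check is simplified):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <cassert>

// Build the camera orientation looking from 'eye' toward 'target', derive
// the view matrix by undoing it, and compare with glm::lookAt.
void TestViewMatrix(const glm::vec3& eye, const glm::vec3& target,
                    const glm::vec3& up)
{
    glm::quat orientation = glm::quatLookAt(glm::normalize(target - eye), up);

    glm::mat4 view = glm::mat4_cast(glm::conjugate(orientation)) *
                     glm::translate(glm::mat4(1.0f), -eye);

    glm::mat4 reference = glm::lookAt(eye, target, up);

    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            assert(glm::abs(view[c][r] - reference[c][r]) < 1e-5f);
}
```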
It’s a confusing problem to deal with, and you should take advice from someone more skilled.

Here, matrix math looks like a consensus sport:
opengl handedness

I went back to the book. An inverse is calculated algebraically through a lot of complexity. The author later talks of undoing the action of a transform instead of taking the inverse: undoing in the sense of rotating and translating in opposite directions. It makes sense if you think of what’s in the projection. I think of it as rooted at (0, 0, 0) and axis-aligned, i.e. where your look-at points end up when you send them through the view matrix.
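To make the “undoing” concrete, a small GLM sketch (names are mine): for a rigid transform, the cheap undo matches the expensive algebraic inverse.

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>

// For a rigid transform (rotation + translation only), the algebraic
// inverse and the "undo" (reversed, negated operations) agree.
bool UndoMatchesInverse(const glm::quat& q, const glm::vec3& p)
{
    glm::mat4 transform = glm::translate(glm::mat4(1.0f), p) * glm::mat4_cast(q);
    glm::mat4 undo = glm::mat4_cast(glm::conjugate(q)) *
                     glm::translate(glm::mat4(1.0f), -p);
    glm::mat4 diff = glm::inverse(transform) - undo;

    float maxError = 0.0f;
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r)
            maxError = glm::max(maxError, glm::abs(diff[c][r]));
    return maxError < 1e-5f;
}
```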
Not confused yet?

The “view orientation” you get from xrLocateViews for the left/right eye might not point forward on all the different variations of HMDs. The “VIEW” reference space is designed for this. You can consider using XR_REFERENCE_SPACE_TYPE_VIEW.
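A sketch of that approach (error handling omitted; whether you keep the full pose or only its yaw component is up to your application):

```cpp
#include <openxr/openxr.h>

// Create a VIEW reference space once; its origin tracks the "center eye".
XrSpace CreateViewSpace(XrSession session)
{
    XrReferenceSpaceCreateInfo info{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
    info.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_VIEW;
    info.poseInReferenceSpace.orientation = {0.0f, 0.0f, 0.0f, 1.0f};  // identity
    info.poseInReferenceSpace.position = {0.0f, 0.0f, 0.0f};
    XrSpace viewSpace = XR_NULL_HANDLE;
    xrCreateReferenceSpace(session, &info, &viewSpace);
    return viewSpace;
}

// On "recenter": read the head pose in the app's base (e.g. LOCAL) space.
// Transforming the world by the inverse of this pose centers the user.
XrPosef HeadPoseInBaseSpace(XrSpace viewSpace, XrSpace baseSpace, XrTime time)
{
    XrSpaceLocation location{XR_TYPE_SPACE_LOCATION};
    xrLocateSpace(viewSpace, baseSpace, time, &location);
    return location.pose;  // check location.locationFlags in real code
}
```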

Thank you, I will give that a try.

From the OpenGL corner:
A view matrix would make sense as one per eye.
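A sketch of what I mean, combining the XrView pose with the undo-style inversion discussed above:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/quaternion.hpp>
#include <openxr/openxr.h>

// One view matrix per eye: undo the XrView pose (conjugated rotation,
// negated translation, reversed order).
glm::mat4 ViewMatrixForEye(const XrView& view)
{
    const XrQuaternionf& q = view.pose.orientation;
    const XrVector3f& p = view.pose.position;
    glm::quat orientation(q.w, q.x, q.y, q.z);  // glm::quat is (w, x, y, z)
    glm::vec3 position(p.x, p.y, p.z);
    return glm::mat4_cast(glm::conjugate(orientation)) *
           glm::translate(glm::mat4(1.0f), -position);
}
```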
