So I finally got OpenXR set up, using code from the official samples.
I’m confused, however, about how to actually render something. I placed a box inside renderlayer, but it gets drawn twice and it moves with my headset. Then I tried using view.pose to set the camera position and orientation, but that didn’t work well either; now I can’t see the box I’m rendering at all.
Is it correct to set the pose as the camera view matrix for each view?
After setting the scissors, I do this for each view. If I don’t do it, I see two red boxes; if I do, I see nothing.
The code you posted doesn’t mean much without context. There is no direct camera concept built into OpenXR. Are you using an existing game or rendering engine?
Generally with XR rendering, you need to transform vertex coordinates from their local space to clip space in a shader, using one or more matrices going between these spaces:
Local space → World space → Room space (the LOCAL or STAGE reference space in OpenXR) → View space (per eye) → Clip space.
With OpenXR, you get the views and FOV data for each eye from xrLocateViews. The view pose is that eye’s position and orientation in room space, so the room-to-view transform (your view matrix) is the inverse of that pose, and the FOV data is used to construct a projection matrix that transforms view space into clip space. Your code looks like it is building a correct matrix at least, but without context it’s impossible to know how it gets used.
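For reference, the per-eye matrix setup in the official sample roughly boils down to something like the sketch below, using the xr_linear.h math helpers that ship with the SDK samples. The names session, appSpace and predictedDisplayTime stand in for whatever your own session/frame loop provides, and the near/far values are arbitrary:

```cpp
#include <openxr/openxr.h>
#include "xr_linear.h"  // math helpers shipped with the OpenXR SDK samples

// Sketch: build view/projection matrices for each eye once per frame.
// 'session', 'appSpace' (a LOCAL or STAGE space) and 'predictedDisplayTime'
// are assumed to come from your own frame loop.
void UpdateEyeMatrices(XrSession session, XrSpace appSpace, XrTime predictedDisplayTime) {
    XrView views[2] = {{XR_TYPE_VIEW}, {XR_TYPE_VIEW}};

    XrViewLocateInfo locateInfo{XR_TYPE_VIEW_LOCATE_INFO};
    locateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
    locateInfo.displayTime = predictedDisplayTime;
    locateInfo.space = appSpace;

    XrViewState viewState{XR_TYPE_VIEW_STATE};
    uint32_t viewCount = 0;
    xrLocateViews(session, &locateInfo, &viewState, 2, &viewCount, views);

    for (uint32_t i = 0; i < viewCount; ++i) {
        // The pose is the eye's position/orientation in room space,
        // so the view matrix is the *inverse* of that transform.
        XrMatrix4x4f eyePoseInRoom;
        XrVector3f unitScale{1.0f, 1.0f, 1.0f};
        XrMatrix4x4f_CreateTranslationRotationScale(&eyePoseInRoom, &views[i].pose.position,
                                                    &views[i].pose.orientation, &unitScale);
        XrMatrix4x4f view;
        XrMatrix4x4f_InvertRigidBody(&view, &eyePoseInRoom);

        // The FOV drives the (usually asymmetric) projection matrix.
        XrMatrix4x4f proj;
        XrMatrix4x4f_CreateProjectionFov(&proj, GRAPHICS_OPENGL, views[i].fov, 0.05f, 100.0f);

        XrMatrix4x4f viewProj;
        XrMatrix4x4f_Multiply(&viewProj, &proj, &view);
        // Upload viewProj (times your model matrix) to the shader for this eye.
    }
}
```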
Does your renderer have the same axis convention as OpenXR? OpenXR uses +Y up; if you e.g. have +Z up, you’ll have to further rotate the orientations you’re getting from the views in order to match your world.
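Purely as an illustration, a fixed pre-rotation like the one below re-expresses an OpenXR pose in a +Z-up world. The helper names are hypothetical, and whether this exact rotation (or one applied on both sides of the orientation) is what you need depends on which axis your engine treats as forward:

```cpp
#include <openxr/openxr.h>

// Hypothetical helper: standard Hamilton quaternion product a * b.
static XrQuaternionf QuatMul(const XrQuaternionf& a, const XrQuaternionf& b) {
    return {a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
            a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z};
}

// Hypothetical helper: re-express an OpenXR pose (right-handed, +Y up)
// in a +Z-up world by rotating +90 degrees around X.
static XrPosef YUpToZUp(const XrPosef& in) {
    // +90 degrees about X maps (x, y, z) -> (x, -z, y), so +Y (up) becomes +Z.
    const XrQuaternionf rotX90{0.70710678f, 0.0f, 0.0f, 0.70710678f};  // (x, y, z, w)
    XrPosef out;
    out.orientation = QuatMul(rotX90, in.orientation);
    out.position = {in.position.x, -in.position.z, in.position.y};
    return out;
}
```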
What kind of reference space are you using? Make sure it is XR_REFERENCE_SPACE_TYPE_LOCAL or XR_REFERENCE_SPACE_TYPE_STAGE and not XR_REFERENCE_SPACE_TYPE_VIEW; the VIEW space is head-locked, so anything located relative to it will follow the headset.
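Creating such a space is a single call; a minimal sketch (the resulting appSpace is what you then pass to xrLocateViews and reference in your projection layer):

```cpp
#include <openxr/openxr.h>

// Sketch: create a LOCAL reference space to locate the views against.
// Using VIEW here instead would head-lock everything you render.
XrSpace CreateAppSpace(XrSession session) {
    XrReferenceSpaceCreateInfo createInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
    createInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_LOCAL;  // or ..._STAGE for room-scale
    createInfo.poseInReferenceSpace.orientation.w = 1.0f;           // identity pose
    XrSpace appSpace = XR_NULL_HANDLE;
    xrCreateReferenceSpace(session, &createInfo, &appSpace);
    return appSpace;
}
```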
You haven’t mentioned anything about your projection matrix, and it could be responsible for all sorts of problems and misalignments. I’d suggest you take the code from the OpenXR tutorial (it contains helper functions for all the math needed), verify it works, and then adapt it to your own codebase and data. Read the comments in those functions carefully, because depending on your rendering API (e.g. Vulkan vs OpenGL) you may need to make some subtle yet crucial changes in them.
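To make that last point concrete, the SDK/tutorial projection helper takes the target graphics API as a parameter precisely because of those differences, mainly the clip-space depth range. A small sketch (near/far values are arbitrary):

```cpp
#include "xr_linear.h"  // math helper header from the OpenXR SDK samples / tutorial

// Sketch: the same XrFovf yields a different projection matrix per API,
// because Vulkan/D3D use a 0..1 clip-space depth range while OpenGL uses -1..1.
// Vulkan's flipped Y is typically handled separately, e.g. with a negative
// viewport height.
XrMatrix4x4f MakeProjection(const XrFovf& fov, bool useVulkan) {
    XrMatrix4x4f proj;
    XrMatrix4x4f_CreateProjectionFov(&proj, useVulkan ? GRAPHICS_VULKAN : GRAPHICS_OPENGL,
                                     fov, 0.05f, 100.0f);  // arbitrary near/far planes
    return proj;
}
```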
So yeah, thank you both. I got as far as seeing a correctly rendered cube, but the cube moves up and down with the headset (looking left and right works).
There is no engine, just a test for now, but I plan to integrate this into my own C++ engine. I find the API wrapper stuff in the samples confusing, but I’ll have another look. This almost works: