Problems setting camera and objects up for rendering

So I finally got OpenXR set up, using code from the official samples.
However, I'm confused about how to render something. I placed a box inside the render layer, but it gets drawn twice and moves with my headset. Then I tried using view.pose to set the camera position and orientation, but that didn't work well either; now I can't see the box I'm rendering at all.
Is it correct to use the pose as the camera view matrix for each view?

After setting the scissors, I do this for each view.
If I don't do it, I see two red boxes; if I do, I see nothing :wink:

 glm::vec3 camView = glm::vec3(view.pose.position.x, view.pose.position.y, view.pose.position.z);
 glm::quat camRot(view.pose.orientation.w, view.pose.orientation.x, view.pose.orientation.y, view.pose.orientation.z);

 // Combine rotation and translation into a single matrix
 glm::mat4 camViewMat = glm::mat4_cast(camRot) * glm::translate(glm::mat4(1.0f), camView);

 renderer->camera.setView(camViewMat);
 renderer->render();

Thanks in advance
Odin

The code you posted doesn't mean much without context. There is no direct camera concept built into OpenXR. Are you using an existing game or rendering engine?

Generally with XR rendering, you need to transform vertex coordinates from their local space to clip space in a shader, using one or more matrices going between these spaces:
Local space → World space → Room space (the LOCAL or STAGE reference space in OpenXR) → View space (per eye) → Clip space.
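As a rough sketch of that chain in GLM terms (the names here are illustrative, not from any particular engine):

#include <glm/glm.hpp>

// Illustrative only: each matrix maps one space in the chain to the next,
// applied right-to-left to the local-space position.
glm::vec4 ToClipSpace(const glm::vec3& localPos,
                      const glm::mat4& model,       // local -> world
                      const glm::mat4& worldToRoom, // world -> room (often identity)
                      const glm::mat4& view,        // room  -> view (inverse of the eye pose)
                      const glm::mat4& projection)  // view  -> clip (built from the XR FOV)
{
    return projection * view * worldToRoom * model * glm::vec4(localPos, 1.0f);
}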

With OpenXR, you get the views and FOV data for each eye from xrLocateViews. The view data represents a transform from room space to view space, and the FOV data is used to construct a projection matrix that transforms view space into clip space. Your code looks like it is building a correct matrix at least, but without context it’s impossible to know how it gets used.
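For reference, a minimal sketch of that call, where session, appSpace (your reference space), and frameState are assumed to come from your setup code and frame loop:

#include <vector>
#include <openxr/openxr.h>

XrViewLocateInfo viewLocateInfo{XR_TYPE_VIEW_LOCATE_INFO};
viewLocateInfo.viewConfigurationType = XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO;
viewLocateInfo.displayTime = frameState.predictedDisplayTime;
viewLocateInfo.space = appSpace; // a LOCAL or STAGE reference space

XrViewState viewState{XR_TYPE_VIEW_STATE};
std::vector<XrView> views(2, {XR_TYPE_VIEW});
uint32_t viewCount = 0;
xrLocateViews(session, &viewLocateInfo, &viewState,
              (uint32_t)views.size(), &viewCount, views.data());
// views[i].pose and views[i].fov now hold the per-eye data for this frame.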

Some things that come to mind:

  1. Does your renderer use the same axis convention as OpenXR? OpenXR uses +Y up; if you e.g. have +Z up, you'll have to further rotate the poses you get from the views to match your world (see the sketch after this list).

  2. What kind of reference space are you using? Ensure it is XR_REFERENCE_SPACE_TYPE_LOCAL or XR_REFERENCE_SPACE_TYPE_STAGE and not XR_REFERENCE_SPACE_TYPE_VIEW.

  3. You're not mentioning anything about your projection matrix, and it could be responsible for all sorts of problems and misalignments. I'd suggest you use the code from the OpenXR tutorial (it contains helper functions for all the math needed), verify that it works, and then adapt it to your own codebase and data. Read the comments in those functions carefully, because depending on what your rendering API is (e.g. Vulkan vs. OpenGL) you may need to make some subtle yet crucial changes.
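For point 1, a minimal sketch of such an axis fix-up, assuming a +Z-up engine and reusing camView/camRot from the first post (the exact angle and sign depend on your engine's conventions):

#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

// Assumed +Z-up engine: rotating +90 degrees about X maps OpenXR's +Y-up
// axis onto +Z. Apply the same fix-up to both orientation and position.
const glm::quat yUpToZUp = glm::angleAxis(glm::radians(90.0f), glm::vec3(1.0f, 0.0f, 0.0f));
glm::quat engineOrientation = yUpToZUp * camRot;
glm::vec3 enginePosition = yUpToZUp * camView;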

So yeah, thank you both. I've gotten as far as seeing a correctly rendered cube, but the cube moves up and down with the headset (looking left and right works).

There is no engine, it's just a test for now, but I plan to integrate it into my own C++ engine. I find the API wrapper stuff in the samples confusing, but I'll have another look. This almost works:

float angleWidth = tan(view.fov.angleRight) - tan(view.fov.angleLeft);
float angleHeight = tan(view.fov.angleDown) - tan(view.fov.angleUp);
float farDistance = 100.0f;
float nearDistance = 0.05f;

glm::mat4 projectionMatrix(0.0f);

projectionMatrix[0][0] = 2.0f / angleWidth;
projectionMatrix[2][0] = (tan(view.fov.angleRight) + tan(view.fov.angleLeft)) / angleWidth;
projectionMatrix[1][1] = 2.0f / angleHeight;
projectionMatrix[2][1] = (tan(view.fov.angleUp) + tan(view.fov.angleDown)) / angleHeight;
projectionMatrix[2][2] = -farDistance / (farDistance - nearDistance);
projectionMatrix[3][2] = -(farDistance * nearDistance) / (farDistance - nearDistance);
projectionMatrix[2][3] = -1.0f;

glm::mat4 viewMatrix = glm::inverse(
    glm::translate(glm::mat4(1.0f), glm::vec3(view.pose.position.x, view.pose.position.y, view.pose.position.z))
    * glm::mat4_cast(glm::quat(view.pose.orientation.w, view.pose.orientation.x, view.pose.orientation.y, view.pose.orientation.z))
);

renderer->camera.SetProjection(projectionMatrix);
renderer->camera.setView(viewMatrix);

renderer->render();

and just a simple test vertex shader (GLSL):

#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;

out vec3 ourColor;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    ourColor = aColor;
}
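The matrices are uploaded the usual way, roughly like this (shaderProgram is my linked program):

#include <glm/gtc/type_ptr.hpp>

// Upload the camera matrices; GLM stores matrices column-major, which is
// what glUniformMatrix4fv expects with transpose = GL_FALSE.
glUseProgram(shaderProgram);
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "view"),
                   1, GL_FALSE, glm::value_ptr(viewMatrix));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "projection"),
                   1, GL_FALSE, glm::value_ptr(projectionMatrix));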

I use XR_REFERENCE_SPACE_TYPE_LOCAL.
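One thing I'm not sure about is the sign of angleHeight. If I read the SDK's xr_linear.h right, XrMatrix4x4f_CreateProjectionFov flips that term depending on the graphics API, roughly like this (paraphrased, isVulkan is just my shorthand):

// Paraphrased, not verbatim: Vulkan's clip space has Y pointing down,
// so the height term is negated relative to OpenGL.
float tanAngleHeight = isVulkan
    ? (tan(view.fov.angleDown) - tan(view.fov.angleUp))  // Vulkan
    : (tan(view.fov.angleUp) - tan(view.fov.angleDown)); // OpenGL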

Nvm. Found an easier way. Copied sample code and converted to GLM :wink:
Works like a charm :smiley:

XrMatrix4x4f proj;
XrMatrix4x4f_CreateProjectionFov(&proj, GRAPHICS_OPENGL, view.fov, 0.05f, 100.0f);
XrMatrix4x4f toView;
XrVector3f scale1m{ 1.0f, 1.0f, 1.0f };
XrMatrix4x4f_CreateTranslationRotationScale(&toView, &view.pose.position, &view.pose.orientation, &scale1m);
XrMatrix4x4f viewm;
XrMatrix4x4f_InvertRigidBody(&viewm, &toView);

renderer->camera.SetProjection(ConvertXrMatrixToGlm(proj));
renderer->camera.setView(ConvertXrMatrixToGlm(viewm));

I wrote this:

glm::mat4 ConvertXrMatrixToGlm(const XrMatrix4x4f& xrMatrix)
{
    // Both XrMatrix4x4f and glm::mat4 store their 16 floats in
    // column-major order, so the elements copy straight across.
    glm::mat4 glmMatrix;

    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            glmMatrix[col][row] = xrMatrix.m[col * 4 + row];

    return glmMatrix;
}
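Since both types keep their elements in column-major order, I believe GLM's type_ptr helper can do the same conversion in one line, if you prefer:

#include <glm/gtc/type_ptr.hpp>

// Same conversion via GLM's pointer helper; works because both types
// store 16 floats in column-major order.
glm::mat4 ConvertXrMatrixToGlm(const XrMatrix4x4f& xrMatrix)
{
    return glm::make_mat4(xrMatrix.m);
}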