Oculus runtime ignores Projection Layer View's pose

Greetings,

I’m currently implementing OpenXR support in Chaos Vantage and have come across a bug.

The Oculus runtime seems to be ignoring the XrCompositionLayerProjectionView’s pose. Even if the pose is static, the projection stays locked in front of the HMD.

I can see how this usually wouldn’t be a problem, but since we are developing a real-time ray tracer, we want to implement some resource-saving solutions, e.g. a mono view (both eyes see the same image). On our side this is done by submitting the same pose for both eyes and reusing the same render result.
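For context, here is roughly what I mean, as a simplified sketch rather than our actual code; names like centerEyePose, sharedFov, monoSwapchain, and appSpace are placeholders:

```cpp
#include <openxr/openxr.h>

// Simplified sketch: both projection views share one pose and one
// swapchain image, so the compositor receives a monoscopic frame.
XrCompositionLayerProjectionView views[2] = {};
for (int i = 0; i < 2; ++i) {
    views[i].type = XR_TYPE_COMPOSITION_LAYER_PROJECTION_VIEW;
    views[i].pose = centerEyePose;               // same pose for both eyes
    views[i].fov = sharedFov;                    // same FOV for both eyes
    views[i].subImage.swapchain = monoSwapchain; // single shared render result
    views[i].subImage.imageRect = {{0, 0}, {width, height}};
    views[i].subImage.imageArrayIndex = 0;
}

XrCompositionLayerProjection layer{XR_TYPE_COMPOSITION_LAYER_PROJECTION};
layer.space = appSpace; // e.g. a LOCAL reference space
layer.viewCount = 2;
layer.views = views;
```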

I am wondering whether this is a problem on your side or on Meta’s.

Edit: This does not happen with the SteamVR runtime.

Thanks,
Alexander Kostov, Chaos LTD.

I don’t have an answer for the Oculus question, but yes, the poses in the composition layer views should be used for spatial reprojection into the perspective of the eyes, where the compositor should use the most recently available poses.

I am however really curious to understand your approach. There’s only so much that spatial reprojection can do, and creating good stereoscopy from a single image is not really one of them, I believe(?)

Think about this case: if you have an object close to the camera, the number of pixels occluded by the object is large (the closer the object is to the camera, the more pixels are occluded behind it). When the compositor reprojects that image to match the true perspective of each eye, it can’t just guess what’s behind the object.

If you render the ducky right in front of the head with a stereo camera, each view will have the ducky slightly to the left/right of the other, in order to account for perspective convergence (since the focal point is not the ducky but a point behind it).

If you use a single camera and later reproject, there is no way for the reprojection to reconstruct the pixels hidden behind the ducky.

I believe a simple rotational reprojection will simply distort the ducky (e.g. stretch it out), and a depth-based reprojection with reconstruction will just guess from adjacent pixels (which typically creates a “jelly”-looking effect in motion).

I’m really curious how you can solve this problem.

I didn’t clarify what I meant by “mono view” - I mean monoscopic.

The reason for this is that some scenes are extremely heavy, and rendering them for both eyes would mean critically low FPS. A monoscopic view in a headset just results in a flat image, which doesn’t bother my brain at all (at least from my research).

I’ve implemented a stereoscopic solution that does render from both eyes, as intended.

I guess I’ve put you through some mental gymnastics in vain, mbucchia1, sorry for that! haha 🙂

Yeah, it seems that there is a problem with the Oculus runtime: it completely ignores both the pose AND the FOV of the composition projection layer view.

From the Khronos docs: “However, applications may submit an XrCompositionLayerProjectionView which has a different view or FOV than that from xrLocateViews. In this case, the runtime will map the view and FOV to the system display appropriately. In the case that two submitted views within a single layer overlap, they must be composited in view array order.”

I’m going to be thrilled if this gets fixed, because the performance hit seems way lower when using the Oculus runtime instead of SteamVR.

You can only change the FOV if the runtime reports mutable FOV. I’d be very surprised if it weren’t following the pose, because without knowing the original pose, it can’t reproject/timewarp anything…

The mention of SteamVR suggests you’re on the desktop PC runtime, right?

Yes, I am using the desktop PC runtime. Is there a way to check whether something is mutable in OpenXR?

Can you explain this further? I’m still learning OpenXR and maybe I’m not understanding correctly: “I’d be very surprised if it weren’t following the pose, because without knowing the original pose, it can’t reproject/timewarp anything”.

Mutable: see XrViewConfigurationProperties::fovMutable in the OpenXR™ Specification.
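For example, a minimal sketch of the query, assuming an already-created instance and systemId:

```cpp
#include <openxr/openxr.h>

// Sketch: ask the runtime whether a submitted FOV may differ from the
// one returned by xrLocateViews.
XrViewConfigurationProperties props{XR_TYPE_VIEW_CONFIGURATION_PROPERTIES};
XrResult res = xrGetViewConfigurationProperties(
    instance, systemId, XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, &props);
if (XR_SUCCEEDED(res) && props.fovMutable == XR_TRUE) {
    // The runtime accepts XrCompositionLayerProjectionView::fov values
    // that differ from what xrLocateViews reported.
}
```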

Need the pose: the way timewarp/reprojection works is that it computes a transform from the difference between the head/eye poses used to render and the updated best estimate of the head/eye poses at the moment the image will actually appear. (Because time passes between those two estimates being made, the latter is generally a much closer estimate.) So if the runtime totally ignored the pose you supply, it couldn’t timewarp, because it would be missing half the input it needs, and it definitely does timewarp. But there may be something about your particular use case that it’s not handling well, since presumably you are not using the per-eye output of xrLocateViews as the eye poses when you render, which is the normal pattern.
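To illustrate the “half the input it needs” point, here is a conceptual sketch in my own notation (not any runtime’s actual code): the core of a rotational timewarp is the delta between the orientation the frame was rendered with and the newest predicted orientation.

```cpp
#include <openxr/openxr.h>

// Conjugate of a unit quaternion is its inverse.
static XrQuaternionf QuatConjugate(XrQuaternionf q) {
    return {-q.x, -q.y, -q.z, q.w};
}

// Hamilton product of two quaternions.
static XrQuaternionf QuatMul(XrQuaternionf a, XrQuaternionf b) {
    return {a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
            a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
            a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w,
            a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z};
}

// delta = displayQ^-1 * renderQ: how the head turned between the pose the
// app rendered with (submitted in the layer) and the newest display-time
// estimate. Without the submitted pose, this delta cannot be formed.
XrQuaternionf TimewarpRotationDelta(XrQuaternionf renderQ, XrQuaternionf displayQ) {
    return QuatMul(QuatConjugate(displayQ), renderQ);
}
```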

Is there a more suitable option for creating a monoscopic view, one that doesn’t cause odd behavior like in my case?

Yes, the FOV is not mutable on Oculus.

For future reference, in case anyone encounters this problem:
I solved it by using XrCompositionLayerQuad instead of XrCompositionLayerProjection.
Create a reference space positioned in front of the HMD and use its pose for layer.pose. This way you get a makeshift projection which is pretty usable (if you ask me) for a monoscopic view; a sketch of the approach is below.
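Roughly, with placeholder names (session, localSpace, monoSwapchain, frameState, quad dimensions), not the exact Vantage code:

```cpp
#include <openxr/openxr.h>

// Once: create a VIEW-relative reference space offset 1 m in front of the HMD.
XrReferenceSpaceCreateInfo viewFrontInfo{XR_TYPE_REFERENCE_SPACE_CREATE_INFO};
viewFrontInfo.referenceSpaceType = XR_REFERENCE_SPACE_TYPE_VIEW;
viewFrontInfo.poseInReferenceSpace = {{0, 0, 0, 1}, {0, 0, -1.0f}};
XrSpace viewFrontSpace = XR_NULL_HANDLE;
xrCreateReferenceSpace(session, &viewFrontInfo, &viewFrontSpace);

// Per frame: locate that "in front of the HMD" point in the app's LOCAL space.
XrSpaceLocation loc{XR_TYPE_SPACE_LOCATION};
xrLocateSpace(viewFrontSpace, localSpace, frameState.predictedDisplayTime, &loc);

// Submit the mono render as a quad layer at the located pose.
XrCompositionLayerQuad quad{XR_TYPE_COMPOSITION_LAYER_QUAD};
quad.space = localSpace;
quad.eyeVisibility = XR_EYE_VISIBILITY_BOTH; // same image to both eyes
quad.subImage.swapchain = monoSwapchain;
quad.subImage.imageRect = {{0, 0}, {width, height}};
quad.pose = loc.pose; // check loc.locationFlags for validity in real code
quad.size = {quadWidthMeters, quadHeightMeters}; // sized to roughly fill the view
```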
