Is it possible for composition layers to take the world's depth into account?

I want to render some UI into composition layer quads, but I also want the user to have a laser pointer. However, from my understanding, a composition layer will overwrite everything the engine drew in the preceding projection layer. What is the best practice to avoid that?

I want to use composition layers to avoid blurry text caused by reprojection, but I also want some objects (at least the laser pointer) to be able to occlude the UI.

There is no cross-vendor way to do that.

You can do it on Oculus via XR_FB_composition_layer_depth_test and on Varjo via XR_VARJO_composition_layer_depth_test. On runtimes that support neither extension, compositing falls back to the painter's algorithm, i.e. layers are simply drawn on top of each other in submission order.
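
In case it helps, here is a minimal sketch (C++ against openxr.h) of how the Oculus path could look: chaining an XrCompositionLayerDepthTestFB onto the quad layer's next pointer before xrEndFrame. The SubmitFrame wrapper, the fbDepthTestAvailable flag, and the compare-op choice are my own illustrative assumptions, not code from the extension spec; it also assumes the instance was created with XR_FB_composition_layer_depth_test enabled and that the projection layer already submits depth via XR_KHR_composition_layer_depth.

```cpp
// Sketch: depth-test a UI quad against the world using XR_FB_composition_layer_depth_test.
// Assumes both XR_FB_composition_layer_depth_test and XR_KHR_composition_layer_depth were
// enabled at instance creation, and that each projection view already chains an
// XrCompositionLayerDepthInfoKHR so the compositor has world depth to test against.
#include <openxr/openxr.h>

void SubmitFrame(XrSession session, const XrFrameState& frameState,
                 XrCompositionLayerProjection& projectionLayer,  // world; depth info chained per view
                 XrCompositionLayerQuad& uiQuadLayer,            // the UI quad
                 bool fbDepthTestAvailable)                      // hypothetical flag: extension enabled?
{
    // Ask the compositor to depth-test the quad against depth accumulated from earlier layers,
    // so world geometry (e.g. the laser pointer) closer than the quad occludes it.
    XrCompositionLayerDepthTestFB depthTest{XR_TYPE_COMPOSITION_LAYER_DEPTH_TEST_FB};
    depthTest.depthMask = XR_TRUE;                         // let the quad write depth as well
    depthTest.compareOp = XR_COMPARE_OP_LESS_OR_EQUAL_FB;  // standard "closer wins" comparison

    if (fbDepthTestAvailable) {
        depthTest.next = uiQuadLayer.next;  // preserve anything already chained on the quad
        uiQuadLayer.next = &depthTest;
    }
    // Without the extension the quad simply composites on top (painter's algorithm).

    const XrCompositionLayerBaseHeader* layers[] = {
        reinterpret_cast<const XrCompositionLayerBaseHeader*>(&projectionLayer),
        reinterpret_cast<const XrCompositionLayerBaseHeader*>(&uiQuadLayer),
    };

    XrFrameEndInfo endInfo{XR_TYPE_FRAME_END_INFO};
    endInfo.displayTime = frameState.predictedDisplayTime;
    endInfo.environmentBlendMode = XR_ENVIRONMENT_BLEND_MODE_OPAQUE;
    endInfo.layerCount = 2;
    endInfo.layers = layers;
    xrEndFrame(session, &endInfo);
}
```

I haven't tried the Varjo path myself, but it follows the same general pattern of chaining the vendor's extension struct onto the layer before submission.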

Sad! I really want to use them for my UI, but I'd love to avoid writing hardware-specific code. I'll think about how to overcome this, thank you!
