Quest 3 Depth API with XR_META_environment_depth

I’ve been following the OpenXR tutorial and I want to implement the Depth API to obtain data from the Quest 3’s depth sensor.

I’ve followed Meta’s guide from their native mobile depth documentation and read the OpenXR documentation on XrEnvironmentDepthSwapchainCreateInfoMETA.

Everything runs without errors and all the checks return success, but I don’t know how to access the depth data as textures. The OpenXR documentation says:

Depth is provided as textures in the same format as described in the XR_KHR_composition_layer_depth extension.

xrAcquireEnvironmentDepthImageMETA returns XR_SUCCESS every frame, and I can see that the data in XrEnvironmentDepthImageMETA is correct (the views show the FOV and pose correctly, and swapchainIndex updates every frame), but given the index and the swapchain, I don’t know how to access the depth data as textures.

I’m assuming it’s done in the same way as with regular swapchains and xrEnumerateSwapchainImages, except you use xrEnumerateEnvironmentDepthSwapchainImagesMETA to get references to the textures inside the swapchain. Then you use the swapchainIndex returned from xrAcquireEnvironmentDepthImageMETA to select which of the textures to use.
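The steps described above can be sketched roughly as follows. This is an illustration only, assuming the extension function pointer is loaded through xrGetInstanceProcAddr (extension functions are not exported by the loader directly); variable names like `instance`, `depthSwapchain`, and `environmentDepthImage` are placeholders for your own handles.

```cpp
// Load the extension function (not exported by the loader).
PFN_xrEnumerateEnvironmentDepthSwapchainImagesMETA
    pfnEnumerateDepthImages = nullptr;
xrGetInstanceProcAddr(
    instance, "xrEnumerateEnvironmentDepthSwapchainImagesMETA",
    reinterpret_cast<PFN_xrVoidFunction*>(&pfnEnumerateDepthImages));

// Standard OpenXR two-call idiom: first call returns the count,
// second call fills the caller-allocated array.
uint32_t imageCount = 0;
pfnEnumerateDepthImages(depthSwapchain, 0, &imageCount, nullptr);

std::vector<XrSwapchainImageVulkanKHR> images(
    imageCount, {XR_TYPE_SWAPCHAIN_IMAGE_VULKAN_KHR});
pfnEnumerateDepthImages(
    depthSwapchain, imageCount, &imageCount,
    reinterpret_cast<XrSwapchainImageBaseHeader*>(images.data()));

// Per frame: the index returned by xrAcquireEnvironmentDepthImageMETA
// selects which of the enumerated textures holds the current depth data.
VkImage depthImage = images[environmentDepthImage.swapchainIndex].image;
```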


xrAcquireEnvironmentDepthImageMETA does indeed work correctly: every frame, environmentDepthImage.swapchainIndex cycles through the values 0 to 3.

But how does xrEnumerateEnvironmentDepthSwapchainImagesMETA give me the references to the images? Is it through the last parameter, images? If that’s the case, how would I access the bitmaps? Like this: *(images + swapchainIndex)?

And if that’s the case, since the documentation describes the images parameter as “a pointer to an array of graphics API-specific XrSwapchainImage structures”, then if I’m using Vulkan, would indexing into the array give me a VkImage at each position?

With Vulkan, you have to create and pass an array of XrSwapchainImageVulkanKHR structures, which the function fills out with the data. You then just access it by indexing into the array.

The third member of the struct is the actual VkImage, which you then pass to vkCreateImageView to get something you can sample in a shader.
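A minimal sketch of that view creation, under some assumptions I want to flag: `images` is the array filled by xrEnumerateEnvironmentDepthSwapchainImagesMETA, the environment depth swapchain is a two-layer texture array (one layer per eye), and `depthFormat` must match the format the runtime actually allocated (query the swapchain state rather than hard-coding it). All names here are illustrative.

```cpp
// Create a view over the acquired depth texture so it can be
// bound as a sampled image in a shader.
VkImageViewCreateInfo viewInfo{VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO};
viewInfo.image    = images[environmentDepthImage.swapchainIndex].image;
// 2D-array view: the depth swapchain has one layer per eye.
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
viewInfo.format   = depthFormat;  // must match the runtime's format
viewInfo.subresourceRange.aspectMask     = VK_IMAGE_ASPECT_DEPTH_BIT;
viewInfo.subresourceRange.baseMipLevel   = 0;
viewInfo.subresourceRange.levelCount     = 1;
viewInfo.subresourceRange.baseArrayLayer = 0;
viewInfo.subresourceRange.layerCount     = 2;  // left + right eye

VkImageView depthView = VK_NULL_HANDLE;
vkCreateImageView(device, &viewInfo, nullptr, &depthView);
```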


I’m able to get the VkImage with that method. However, upon further testing, wouldn’t it be easier to use something like m_graphicsAPI->GetSwapchainImage(reinterpret_cast<XrSwapchain>(depthSwapchainMeta), xrEnvironmentDepthImageMETA.swapchainIndex)? It returns a VkImage as well.

However, I still have my initial problem: I need to access the image’s pixel values, since the OpenXR documentation says

Depth is provided as textures

I have created the VkImageView with m_graphicsAPI->CreateImageView; what are the next steps to access the bitmap?

m_graphicsAPI->GetSwapchainImage() is just a wrapper that returns textures previously queried with xrEnumerateSwapchainImages(). You are probably best off making a copy of it, along with the code that populates it, and adapting that to work with the Meta depth swapchain functions.

Do you want to access the pixels on the GPU or the CPU? A VkImage is a handle to pixel data in GPU memory; to access it from the CPU, you first have to copy it to system memory. I’m not very experienced with Vulkan, so I can’t help with the details.
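For the CPU path, a rough sketch of a readback might look like the following. This assumes a command buffer in the recording state and a host-visible staging VkBuffer large enough for width × height × layers × bytes-per-texel; queue submission, fence waiting, and error handling are omitted, and all names (`cmd`, `stagingBuffer`, `stagingMemory`, `width`, `height`) are placeholders, not part of any API shown in this thread.

```cpp
// 1. Transition the depth image so it can be a transfer source.
VkImageMemoryBarrier barrier{VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER};
barrier.oldLayout           = VK_IMAGE_LAYOUT_UNDEFINED;  // or the runtime's layout
barrier.newLayout           = VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = depthImage;
barrier.subresourceRange    = {VK_IMAGE_ASPECT_DEPTH_BIT, 0, 1, 0, 2};
barrier.srcAccessMask       = 0;
barrier.dstAccessMask       = VK_ACCESS_TRANSFER_READ_BIT;
vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_TOP_OF_PIPE_BIT,
                     VK_PIPELINE_STAGE_TRANSFER_BIT, 0,
                     0, nullptr, 0, nullptr, 1, &barrier);

// 2. Copy the depth aspect into the staging buffer.
VkBufferImageCopy region{};
region.imageSubresource = {VK_IMAGE_ASPECT_DEPTH_BIT, 0, 0, 2};
region.imageExtent      = {width, height, 1};
vkCmdCopyImageToBuffer(cmd, depthImage,
                       VK_IMAGE_LAYOUT_TRANSFER_SRC_OPTIMAL,
                       stagingBuffer, 1, &region);

// 3. After submitting the command buffer and waiting on a fence,
// map the staging memory to read the raw depth values on the CPU.
void* mapped = nullptr;
vkMapMemory(device, stagingMemory, 0, VK_WHOLE_SIZE, 0, &mapped);
// `mapped` now points at the depth bitmap (texel size depends on the
// swapchain format). Call vkUnmapMemory when done.
```

Note that reading back every frame stalls the pipeline; for anything beyond debugging, sampling the depth texture directly in a shader on the GPU is usually the better design.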
