Foveated rendering on Quest with OpenGL

I tried to implement foveated rendering on Quest using the XR_FB_foveation extension (together with XR_FB_foveation_configuration and XR_FB_swapchain_update_state). I set up the swapchain create info for foveated rendering, create the swapchain, create a foveation profile, update the swapchain (and optionally destroy the foveation profile), and… it fails successfully :slight_smile: There are no error messages and no debug messages, but it doesn't work. The device does learn that it should use foveated rendering (the overlay tool shows the proper FOV value), but there are no visible artefacts and no performance gain.
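For reference, this is roughly the sequence I mean (a minimal sketch, not my actual code: the swapchain parameters, the chosen foveation level, and the missing error handling are assumptions; the struct and function names come from the extension headers):

```cpp
#include <openxr/openxr.h>

// Assumes XR_FB_foveation, XR_FB_foveation_configuration and
// XR_FB_swapchain_update_state were enabled at instance creation, and that
// `createInfo` is an already-filled XrSwapchainCreateInfo.
XrSwapchain CreateFoveatedSwapchain(XrInstance instance, XrSession session,
                                    XrSwapchainCreateInfo createInfo) {
    // 1. Chain the foveation struct into swapchain creation
    //    (scaled-bin is the flag used with GL on Quest).
    XrSwapchainCreateInfoFoveationFB foveationInfo{XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB};
    foveationInfo.flags = XR_SWAPCHAIN_CREATE_FOVEATION_SCALED_BIN_BIT_FB;
    foveationInfo.next = createInfo.next;
    createInfo.next = &foveationInfo;

    XrSwapchain swapchain = XR_NULL_HANDLE;
    xrCreateSwapchain(session, &createInfo, &swapchain);

    // 2. The FB entry points are not exported; load them by name.
    PFN_xrCreateFoveationProfileFB pfnCreateProfile = nullptr;
    PFN_xrDestroyFoveationProfileFB pfnDestroyProfile = nullptr;
    PFN_xrUpdateSwapchainFB pfnUpdateSwapchain = nullptr;
    xrGetInstanceProcAddr(instance, "xrCreateFoveationProfileFB",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnCreateProfile));
    xrGetInstanceProcAddr(instance, "xrDestroyFoveationProfileFB",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnDestroyProfile));
    xrGetInstanceProcAddr(instance, "xrUpdateSwapchainFB",
                          reinterpret_cast<PFN_xrVoidFunction*>(&pfnUpdateSwapchain));

    // 3. Create a level-based foveation profile.
    XrFoveationLevelProfileCreateInfoFB levelInfo{XR_TYPE_FOVEATION_LEVEL_PROFILE_CREATE_INFO_FB};
    levelInfo.level = XR_FOVEATION_LEVEL_HIGH_FB;
    levelInfo.verticalOffset = 0.0f;
    levelInfo.dynamic = XR_FOVEATION_DYNAMIC_DISABLED_FB;

    XrFoveationProfileCreateInfoFB profileInfo{XR_TYPE_FOVEATION_PROFILE_CREATE_INFO_FB};
    profileInfo.next = &levelInfo;

    XrFoveationProfileFB profile = XR_NULL_HANDLE;
    pfnCreateProfile(session, &profileInfo, &profile);

    // 4. Apply the profile to the swapchain.
    XrSwapchainStateFoveationFB foveationState{XR_TYPE_SWAPCHAIN_STATE_FOVEATION_FB};
    foveationState.profile = profile;
    pfnUpdateSwapchain(swapchain,
                       reinterpret_cast<XrSwapchainStateBaseHeaderFB*>(&foveationState));

    // 5. The profile can be destroyed once it has been applied.
    pfnDestroyProfile(profile);
    return swapchain;
}
```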

I checked my code against a sample from Oculus (the same way I learned that on Quest you have to create the action space for hands in a particular order to make it work; otherwise it's the same story again: no errors, no debug messages, it just doesn't work). But it actually doesn't work in the sample either.

Has anyone successfully implemented foveated rendering with OpenXR + OpenGL (and C++)?


I have exactly the same problem :sob:

This sounds like a bug - is there more information about this somewhere that I can share with Oculus?

If you call xrCreateActionSpace to create a hand space, it has to be called after the action set is attached to the session with xrAttachSessionActionSets. The OpenXR specification does not mention any particular order in which these should be called (§7). At the same time, there is no error code that would suggest a required order (§7.3.3). It's really a minor issue, but since no errors are returned, you can end up scratching your head trying to figure out what's going on. Returning XR_ERROR_CALL_ORDER_INVALID would work fine here, although that might be against the specification.
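In code, the order that works on Quest looks like this (a minimal sketch; the action set, the pose action, and the subaction path are assumed to already exist, and error handling is omitted):

```cpp
#include <openxr/openxr.h>

// Assumes `actionSet` contains the pose action `handPoseAction` and that
// `leftHandPath` was resolved from "/user/hand/left" with xrStringToPath.
void AttachThenCreateHandSpace(XrSession session, XrActionSet actionSet,
                               XrAction handPoseAction, XrPath leftHandPath,
                               XrSpace* handSpace) {
    // Attach the action set to the session FIRST...
    XrSessionActionSetsAttachInfo attachInfo{XR_TYPE_SESSION_ACTION_SETS_ATTACH_INFO};
    attachInfo.countActionSets = 1;
    attachInfo.actionSets = &actionSet;
    xrAttachSessionActionSets(session, &attachInfo);

    // ...and only THEN create the action space for the hand pose.
    XrActionSpaceCreateInfo spaceInfo{XR_TYPE_ACTION_SPACE_CREATE_INFO};
    spaceInfo.action = handPoseAction;
    spaceInfo.subactionPath = leftHandPath;
    spaceInfo.poseInActionSpace.orientation.w = 1.0f;  // identity pose
    xrCreateActionSpace(session, &spaceInfo, handSpace);
}
```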

I think Oculus already knows about this - I've seen the issue mentioned somewhere else, with someone from Oculus saying it will be fixed.

I haven’t seen anything about foveated rendering, though :frowning:

Hi ryanpavlik,
I got FFR working when I create a texture array swapchain, use multi-pass mode, and render the left eye image to layer 0 and the right eye image to layer 1. That works well. But when I use multiview mode to render both eyes directly, FFR does not work. I would like to know: can multiview and FFR not be used together? And if they can, how do I combine them? Looking forward to your reply!

fredyan, would you mind sharing how you managed to make it work, please?

There is an OpenXR demo from Oculus that you can refer to:
https://developer.oculus.com/downloads/package/oculus-openxr-mobile-sdk
In this demo you can see FFR working, although it's not obvious. Search for the function ovrRenderer_SetFoveation.
In my application, I chain a struct with swapChainFoveationCreateInfo.type = XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB into the swapchain create info when I use Quest and OpenGL.
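The rough shape of that swapchain setup is below (a sketch only; the format and resolution values are placeholders, not the exact ones from my app):

```cpp
#include <GLES3/gl3.h>
#include <openxr/openxr.h>

// Texture-array swapchain with the foveation struct chained in.
// GL_SRGB8_ALPHA8 is a placeholder format; use whatever your renderer needs.
XrSwapchain CreateEyeArraySwapchain(XrSession session, uint32_t width, uint32_t height) {
    XrSwapchainCreateInfoFoveationFB swapChainFoveationCreateInfo{
        XR_TYPE_SWAPCHAIN_CREATE_INFO_FOVEATION_FB};
    swapChainFoveationCreateInfo.flags = XR_SWAPCHAIN_CREATE_FOVEATION_SCALED_BIN_BIT_FB;

    XrSwapchainCreateInfo swapChainCreateInfo{XR_TYPE_SWAPCHAIN_CREATE_INFO};
    swapChainCreateInfo.next = &swapChainFoveationCreateInfo;
    swapChainCreateInfo.usageFlags =
        XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT | XR_SWAPCHAIN_USAGE_SAMPLED_BIT;
    swapChainCreateInfo.format = GL_SRGB8_ALPHA8;
    swapChainCreateInfo.sampleCount = 1;
    swapChainCreateInfo.width = width;
    swapChainCreateInfo.height = height;
    swapChainCreateInfo.faceCount = 1;
    swapChainCreateInfo.arraySize = 2;  // layer 0 = left eye, layer 1 = right eye
    swapChainCreateInfo.mipCount = 1;

    XrSwapchain swapchain = XR_NULL_HANDLE;
    xrCreateSwapchain(session, &swapChainCreateInfo, &swapchain);
    return swapchain;
}
```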

Thank you. Can you please point me to the sample that works?

I checked XrCompositor_NativeActivity from XrSamples, and when I run it, it doesn't work. Never mind not seeing the effect visually: when I use RenderDoc to check what's going on under the hood, the texture being rendered to has only one foveation-related property set, the minimum density, and it is set to 1.0, which essentially disables foveated rendering.

In the end, I use my own foveated rendering solution, but I treat it more as a temporary workaround.
