Can we retrieve the supersampling ratio to be able to reason about AA effects?

XrViewConfigurationView has "recommended" and "max" resolutions, but it doesn't give a baseline, so we can't know what those are relative to. I know the distortion filter makes this a fuzzy thing, but we can still reason reasonably about the center of the display.
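For reference, here's roughly how those values get queried today (a sketch; assumes an XrInstance and XrSystemId already exist, error handling omitted):

```c
/* A minimal sketch of what the spec exposes today: per-view recommended
   and maximum render target sizes, with no baseline to relate them to
   the physical panel. Assumes instance and systemId already exist;
   error handling omitted. */
#include <openxr/openxr.h>
#include <stdio.h>

void print_view_sizes(XrInstance instance, XrSystemId systemId) {
    uint32_t count = 0;
    xrEnumerateViewConfigurationViews(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, 0, &count, NULL);
    if (count > 2) count = 2; /* stereo: one view per eye */

    XrViewConfigurationView views[2];
    for (uint32_t i = 0; i < count; ++i) {
        views[i].type = XR_TYPE_VIEW_CONFIGURATION_VIEW;
        views[i].next = NULL;
    }
    xrEnumerateViewConfigurationViews(instance, systemId,
        XR_VIEW_CONFIGURATION_TYPE_PRIMARY_STEREO, count, &count, views);

    for (uint32_t i = 0; i < count; ++i) {
        printf("view %u: recommended %ux%u, max %ux%u\n", i,
               views[i].recommendedImageRectWidth,
               views[i].recommendedImageRectHeight,
               views[i].maxImageRectWidth,
               views[i].maxImageRectHeight);
    }
}
```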

There doesn’t seem to be any way. I wonder how that information would best be delivered, since lens distortion can vary a lot between headset designs.

On the topic of lens distortion, it would be cool to be able to do the correction on the application side in OpenXR. I think most legacy runtimes allow doing it already, although I’ve never tried using it. It would make it possible to do some niche approaches like implementing the lens correction in the vertex shader, instead of having to do a postprocess pass for it.
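Roughly the idea, sketched in C rather than shader code; the k1/k2 radial coefficients here are made up, since the real per-headset distortion parameters are exactly what OpenXR doesn't expose:

```c
/* Hypothetical per-vertex barrel pre-distortion using a simple radial
   model. k1/k2 are invented placeholder coefficients; a real headset
   would need the runtime's per-lens distortion data, which OpenXR does
   not currently expose to applications. */
typedef struct { float x, y; } Vec2;

Vec2 predistort(Vec2 ndc, float k1, float k2) {
    float r2 = ndc.x * ndc.x + ndc.y * ndc.y;  /* squared radius from center */
    float s  = 1.0f + k1 * r2 + k2 * r2 * r2;  /* radial scale factor */
    Vec2 out = { ndc.x * s, ndc.y * s };
    return out;
}
```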

This is unlikely to happen, especially as reprojection improves. It may have made some sense back when a (compute) shader pass for distortion correction was a big burden and timewarp/reprojection wasn't a thing. But since the runtime needs that pass anyway to do any kind of timewarp/reprojection (which is essential for high-quality head-mounted experiences), I doubt runtimes will hand this back to the app. It just doesn't make sense, and it offers one more way for an app to not work well.


I have my own no-cost AA effect that works in 3D, but it has to know a good setting based on the pixel size, since it does its magic in the vertex shader. Yesterday I implemented fixed foveated rendering and found that in full supersampling mode (which looks like about 3x) I can eliminate all sign of pixels with this technique by scaling it to the FBO height divided by 3000, without much fine-tuning. I also changed the effect's pass to sample between pixels, which isn't necessary but helps add more smoothing to the image's "signal". That can make it look lumpy, but it washes out in the distortion step, or just isn't visible.

I'm pretty psyched to have a picture that looks real in the sense that no square pixels are visible anywhere and all polygon edges are straight lines. I don't know if that's standard in PCVR games or not, but it's more than I was expecting from VR in 2022. (Edit: technically my headset/card aren't exactly brand new!) Now god rays and dim screens are the only things holding the experience back for me.

> There doesn't seem to be any way. I wonder how that information would best be delivered, since lens distortion can vary a lot between headset designs.

The WMR Portal app reports the peripheral's native resolution, so I suppose it could easily be queried. I'll probably make this setting user-adjustable so people can fine-tune it themselves, like pupil distance on some displays. It's not exactly full FOV, but 6000 pixels tall seems like plenty to get a totally realistic image, except maybe not as crisp as you might like.


I’ve had at least 2 topics (not mine) go off track on this subject. What Ryan says is probably true, but it does seem like it would be good to be able to inject some post-processing code into the shader that does all of this (or rather pre-processing from that shader’s POV) to avoid yet another copy! There’s at least 1 too many copies in the current chain. Which probably doesn’t help when the buffers are so big as 6000x6000 per eye, etc.

Sorry, I only actually read your link just now… I thought it was about doing the distortion step with a mesh as opposed to UV math in the pixel shader, but I see it's about avoiding the copy altogether. I wonder if it could be done with the hardware screen-space tessellation functionality, because it's very complicated to tessellate geometry like this otherwise. I wouldn't have guessed mobile has that functionality, but I don't know. At first, skimming it, I thought it was talking about deferred rendering when it got into lighting. It was an interesting read from a different perspective.

I was kind of surprised to see meshes being used to do the distortion step, but I think that's how it's usually done. It doesn't seem like it would be a win, if you ask me. Ryan's group was doing that; that's where I first heard of it.

I don't know if "reprojection" is really critical or not… it may be, but in theory, if the system were running smoothly and predictably, it shouldn't be needed at all. And if you had a stable environment (which Windows is not), then pushing the performance requirements down would only increase stability. I would be interested in this because my own no-cost AA technique, which doesn't require image-based AA, would work even better in such a system: its results wouldn't have to go through the distortion step, so it should look just as good as on a regular monitor. I think I'll stash a link to this article in my source code in case I ever want to be more ambitious! (Edit: something tells me dynamic tessellation would have serious pop-in problems, and I don't like any pop-in.)


It would be interesting to see how bad the distortion is without tessellation. Apart from not having to do extra postprocess passes, the most intriguing part is that it would enable pixel-mapped rendering at the app level.

This is more of a curiosity now, though; as Ryan pointed out, it wouldn't work with reprojection. I'm not an expert, but I'm guessing it would be hard to get enough control of any full consumer OS to get by without reprojection, even if you got it running stably, especially across different runtimes and VR hardware.

The human head can turn very fast, so minimizing perceived latency is very important to keep the picture from lagging. Asynchronous reprojection feels like a huge difference to me, and from what I've read, it's a make-or-break difference for many people.


For the record, I fine-tuned my 3000 value to 3100, and then I had the "genius" idea to break into the code and see what the buffer size was at "100%" (default supersampling): it was 3092. So I just happened to be off by 8 pixels, though I wasn't trying for better than within 100 pixels anyway. Still, that reveals that 100% is about 1.43148 times the native resolution.

I think it would be good to expose this information; it shouldn't be a total black box. The maximum resolution is very close to double this value, so dividing by 3100 automatically doubles the vertex-shader AA technique's footprint at max resolution, just as 2x supersampling needs. 3092*2 is 6184, but for some reason the reported maximum is 6188; still, very close. Anyway, this is a case study in why OpenXR shouldn't obscure this information.
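Here's that arithmetic as a tiny check; note the 2160 native height is my own assumption (it's just what 3092 / 1.43148 works out to), not something queried from any API:

```c
#include <stdio.h>

int main(void) {
    const double native      = 2160.0; /* assumed panel height in pixels */
    const double recommended = 3092.0; /* observed at "100%" supersampling */
    const double maximum     = 6188.0; /* observed maximum resolution */

    printf("recommended / native  = %.5f\n", recommended / native);  /* ~1.43148 */
    printf("maximum / recommended = %.5f\n", maximum / recommended); /* ~2.00129 */
    return 0;
}
```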

The best place for a feature request, such as "some indication of what resolution to use to get approximately 1:1 pixel mapping in the fovea", is the OpenXR-Docs repository on GitHub.

OpenXR has proven annoying in this regard. Even querying the frame rate requires an extension, and Microsoft's runtime barely implements any extensions. So if you treat Microsoft's runtime as the "lowest common denominator", you can't count on any extensions, meaning you might as well not bother with them.

xrWaitFrame is part of the base spec and it returns a predictedDisplayPeriod corresponding to the refresh rate.

https://registry.khronos.org/OpenXR/specs/1.0/man/html/XrFrameState.html
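For illustration, a minimal sketch of deriving the refresh rate from the base spec alone (assumes a running XrSession in the frame loop):

```c
/* A minimal sketch: derive the display refresh rate from the base spec's
   xrWaitFrame, no extension required. Assumes a running XrSession. */
#include <openxr/openxr.h>
#include <stdio.h>

void print_refresh_rate(XrSession session) {
    XrFrameWaitInfo waitInfo = { XR_TYPE_FRAME_WAIT_INFO };
    XrFrameState frameState = { XR_TYPE_FRAME_STATE };

    if (XR_SUCCEEDED(xrWaitFrame(session, &waitInfo, &frameState)) &&
        frameState.predictedDisplayPeriod > 0) {
        /* predictedDisplayPeriod is an XrDuration in nanoseconds. */
        double hz = 1e9 / (double)frameState.predictedDisplayPeriod;
        printf("Display period: %lld ns (~%.1f Hz)\n",
               (long long)frameState.predictedDisplayPeriod, hz);
    }
}
```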

The extension you are mentioning is for devices that can dynamically change their refresh rate, and therefore it makes no sense to implement it on platforms where the hardware doesn't support that feature.

Looking at the Khronos spreadsheet of extensions, the WMR runtime implements 27 extensions, which is the most of any PCVR runtime. We implement the extensions that make sense for our platform and our customers. If you have specific requests, I'm happy to hear them, but we're not going to support XR_FB_display_refresh_rate, since it's very clearly not needed on our platform, which does not have dynamic refresh rate.

Thanks.

Well, on Windows I've never been able to hit a steady frame rate using timing mechanisms. You have to assume the refresh rate and hope that every frame hits the vertical blank to be smooth.

Sorry I couldn't post this this morning… GitHub was offline.
