It would be nice to have a higher-precision depth format, as in some situations 24 bits of precision is not enough, causing a lot of Z-fighting, especially in PCSX2 emulation, where the console has 32 bits of precision.
32-bit and 64-bit formats would solve the issue, much appreciated.
Nvidia devs told me the API needs to expose that feature; the driver already supports double-precision floats.
It would also be nice to have in Qualcomm drivers.
It would also be useful in OpenGL, by the way. Thanks.
Do they support such a thing?
Also, wouldn’t it make sense to just support 32-bit UNORM depth formats? At least as an option for hardware that provides them?
Lastly, are you unable to make use of the extra bits from a 32-bit float format to at least mitigate these issues?
UNORM is fine, as long as we have 32 bits of precision.
Matter of fact, scratch that… we also need stencil.
We already use 32-bit float; it only has 23 bits of mantissa precision.
It is integer data, but we would be happy with 64-bit float. Also, we’re not using the oversized depth range extension, since that would massively complicate matters.
And our data doesn’t come in as float to begin with.
The problem is that the stepping between values, once they’re converted into the 0–1 range, is not precise enough; the jumps are too large, causing a lot of Z-fighting that wouldn’t otherwise be there on the console (or in our software renderer, which uses doubles).
So the precision available in the 0→1 range is restricted by the size of the mantissa, which is 23 bits.
So yeah, we need 64-bit, aka double float, for the extra precision required.
But as said, the API needs to expose it before the Nvidia drivers can offer it, as the Nvidia devs stated, and currently it doesn’t.
About Qualcomm… I’m going to have to ask their devs.
So after some more thinking, it’s likely that new features in Vulkan won’t really help; we will probably need to do it with ROV, get the barycentric weights, etc., and calculate the depth interpolation ourselves.
The problem is that this will be very slow for us. I was kinda hoping that increasing the precision in that restricted range, so as to pass a translated UINT32 value through, could help alleviate some of our depth precision problems, since the PS2 is integer and interpolates internally (not 100% sure on the precision or method there, but it’s better than what we’re getting from a single 32-bit float).
Another possible use for doubles could be older (Polaris-era) AMD GPUs, or Vega, which don’t have ROV.
Just to be clear, this is unsigned 32-bit, not signed.