I need a way to tell the hardware to use OpenGL’s clip-space depth range of [-1,1] instead of D3D’s [0,1], with the constraint that negative depth values must be allowed: I can’t change this from the shader or from the matrices.
In other words, I’m looking for a Vulkan equivalent of OpenGL’s glClipControl.
Thanks, this answers my question, but sadly it doesn’t solve my problem. As I said, I can’t change the matrices, since I have to emulate OpenGL’s NDC as closely as possible. I’ve already tried FMA-ing the output z position in-shader, and in some nit-picky cases the change in precision made depth tests fail.
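For context, the in-shader remap I mean is the usual clip-space fixup z' = z * 0.5 + w * 0.5, which maps OpenGL-style clip z in [-w, w] onto [0, w] so that the post-divide depth lands in [0, 1]. A minimal sketch of the math in Python, using numpy float32 to mimic GPU single precision (the function name is mine, for illustration):

```python
import numpy as np

def remap_clip_z(z, w):
    """Remap clip-space z from OpenGL's [-w, w] convention to [0, w].

    This mirrors the single FMA a shader would emit:
        z' = fma(z, 0.5, w * 0.5)
    Done in float32 to mimic GPU single precision.
    """
    z = np.float32(z)
    w = np.float32(w)
    half = np.float32(0.5)
    return np.float32(z * half + w * half)

# After the perspective divide (z' / w), depth lands in [0, 1]
# instead of [-1, 1]:
print(remap_clip_z(-1.0, 1.0))  # near plane -> 0.0
print(remap_clip_z(1.0, 1.0))   # far plane  -> 1.0
```

The endpoints remap exactly, but for arbitrary z/w the extra multiply-add can round differently from what the game’s own depth comparisons produced on real hardware, which is exactly the precision problem described above.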
I’m emulating NVIDIA Maxwell GPUs (Nintendo Switch), and those GPUs can natively use both [-1,1] and [0,1], which means that inevitably some games use D3D’s depth space and others OpenGL’s. I know using [-1,1] is not ideal, but in this case I have no control over it. I’ll try opening an issue requesting that this functionality be exposed as an extension.
Unless someone wants to continue the discussion, this thread can be closed from my end.