GLSL position reconstruction

This code works perfectly with normal rendering:

vec3 ScreenCoordToWorldPosition(in vec3 position)
{
	// position.xy is the screen coordinate in [0,1]; position.z is the depth in [0,1].
	// Map to NDC, multiply by the inverse projection*view matrix, then divide by w to undo the perspective divide.
	vec4 coord = InverseCameraProjectionViewMatrix * vec4(position.xy * 2.0f - 1.0f, position.z * 2.0f - 1.0f, 1.0f);
	return coord.xyz / coord.w;
}

Typical usage looks like this. In this case, I am getting a cubemap coordinate for a skybox lookup:

vec3 screencoord;
screencoord.xy = gl_FragCoord.xy / BufferSize;	// BufferSize is the render target size in pixels
screencoord.z = 1.0f;	// far plane
vec3 fragposition = ScreenCoordToWorldPosition(screencoord);
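
For the skybox lookup itself, the reconstructed far-plane position usually gets turned into a view direction for the cubemap sample. A minimal sketch, assuming a CameraPosition uniform and a SkyCubemap sampler (both names are placeholders, not from the code above):

vec3 skydir = normalize(fragposition - CameraPosition);	// direction from the eye through this fragment
vec4 skycolor = texture(SkyCubemap, skydir);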

However, when the camera projection matrix comes from OpenVR, the function produces bad results. The matrix I am inverting is the same projection*view matrix used to calculate the vertex positions, so I don't see why it works with a conventional projection matrix but not with this one.

Any ideas why this is happening and how to fix it?

The (PROJECTION * VIEWING) transform takes you to CLIP space, so the input to the inverse transform is a CLIP-space position (not an NDC-space position). That may have something to do with it.
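
For what it's worth, here is a minimal sketch of the algebra the function above leans on, plus a variant that feeds the inverse transform an explicit clip-space position. The clipW value is an assumption on my part; it would have to come from somewhere, e.g. a G-buffer channel or a recomputed linear depth:

// If clip = ProjectionView * vec4(world, 1.0) and ndc = clip.xyz / clip.w, then
// InverseCameraProjectionViewMatrix * vec4(ndc, 1.0) equals vec4(world, 1.0) / clip.w,
// which is why dividing the result by its own w recovers the world position.
vec3 ClipCoordToWorldPosition(in vec3 ndc, in float clipW)
{
	vec4 clip = vec4(ndc, 1.0) * clipW;	// undo the perspective divide explicitly
	vec4 world = InverseCameraProjectionViewMatrix * clip;
	return world.xyz;	// world.w comes out as ~1.0
}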

For something else to try, check out PositionFromDepth_DarkPhoton() at the bottom of this post. Though there are all kinds of ways to fry this fish:
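
For reference, here is a rough sketch of that style of reconstruction. This is not the exact PositionFromDepth_DarkPhoton() body from the linked post, just the same idea for a symmetric perspective projection; near, far, and the half-field-of-view tangents are assumed to be available as uniforms:

// Reconstruct the eye-space position from a [0,1] window-space depth and a
// [0,1] screen UV, for a symmetric perspective projection.
vec3 EyePositionFromDepth(in float depth, in vec2 screenUV,
	in float near, in float far, in float tanHalfFovX, in float tanHalfFovY)
{
	float ndcZ = depth * 2.0 - 1.0;
	// Linearize the depth into a positive distance in front of the camera.
	float eyeDist = (2.0 * near * far) / (far + near - ndcZ * (far - near));
	vec2 ndcXY = screenUV * 2.0 - 1.0;
	// OpenGL eye space looks down -Z, hence the negated z.
	return vec3(ndcXY.x * tanHalfFovX * eyeDist,
		ndcXY.y * tanHalfFovY * eyeDist,
		-eyeDist);
}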

Yeah, the Leadwerks way works well for Leadwerks perspective projection matrices, and it avoids the need for an extra buffer of screen positions. I switched to the matrix multiplication specifically to support VR, because I don't know whether all the different headsets will use a conventional perspective matrix.

Using the linked-to GLSL function doesn’t require an extra buffer. Just feed it with:

float depth = ( 0.5 * gl_Position.z / gl_Position.w + 0.5 );

or similar, adapted to using frag shader inputs. Simplify to taste.

Note that this assumes the default glDepthRange 0…1.
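
As one way of adapting that to the fragment stage (assuming a DepthBuffer sampler, which is not named anywhere in this thread): with the default depth range, the value stored in the depth buffer is already 0.5 * z/w + 0.5, so it can be read back directly:

float depth = texture(DepthBuffer, gl_FragCoord.xy / BufferSize).r;

In a forward pass, gl_FragCoord.z holds the same window-space depth without any texture read.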