I have existing fragment-position reconstruction code, but it assumes an orthographic projection matrix (no perspective divide). I'd like to get position reconstruction working with an arbitrary projection matrix. (The variable inverseprojectionmatrix is the inverse of the projection matrix multiplied by the camera object's matrix, i.e. the inverse of the view-projection matrix.)
//Construct gl_FragCoord equivalent
fragpos.xy = gl_FragCoord.xy / BufferSize * 2.0f - 1.0f;// convert to [-1, 1] range
fragpos.z = texelFetch(depthTexture, ivec2(gl_FragCoord.xy), 0).r;// depth at this fragment
fragpos.w = 1.0f;
//Convert fragcoord to gl_Position equivalent
vec4 glposition = fragpos;
glposition.z = glposition.z * 2.0f - glposition.w;// opposite of gl_Position.z = (gl_Position.z + gl_Position.w) * 0.5f;
//Convert gl_Position equivalent to vertex position
vec3 worldspaceposition = (inverseprojectionmatrix * glposition).xyz;
//Visualize the result
outColor.rgb = worldspaceposition / 10.0f;
You should divide the result by its w component to bring w back to 1 — the perspective divide in reverse, same as always, I think.
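A minimal sketch of that fix, reusing the names from the snippet above and assuming the same depth-range mapping (depth texture in [0, 1]):

```glsl
// Reconstruct the clip-space position exactly as before
vec4 glposition;
glposition.xy = gl_FragCoord.xy / BufferSize * 2.0f - 1.0f;
glposition.z = texelFetch(depthTexture, ivec2(gl_FragCoord.xy), 0).r * 2.0f - 1.0f;
glposition.w = 1.0f;

// Unproject, then divide by w so the result is a proper position
vec4 unprojected = inverseprojectionmatrix * glposition;
vec3 worldspaceposition = unprojected.xyz / unprojected.w;
```

For an orthographic matrix the resulting w is already 1, so the divide is a no-op; for a perspective matrix it is what makes the reconstruction correct.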
Can you walk me through the good image? It doesn't quite make sense to me. In your code it looks like you route xyz to rgb, but in that case why is there a large uniform green patch (high y?) in the back? Why is there a cyan block (high y and z)? And why are things close to the camera red (high x)?