I have existing fragment position reconstruction code but it assumes an orthogonal matrix. I’d like to get position reconstruction working with any arbitrary projection matrix. (The variable inverseprojectionmatrix is the inverse of [the projection matrix multiplied by the camera object’s matrix].)
//Construct gl_FragCoord equivalent
vec4 fragpos;
fragpos.xy = gl_FragCoord.xy / BufferSize * 2.0f - 1.0f; // convert to [-1, 1] range
fragpos.z = texelFetch(depthTexture, ivec2(gl_FragCoord.xy), 0).r; // get the depth at this coord
fragpos.w = 1.0f;
//Convert fragcoord to gl_Position equivalent
vec4 glposition = fragpos;
glposition.z = glposition.z * 2.0f - glposition.w; // opposite of gl_Position.z = (gl_Position.z + gl_Position.w) * 0.5f;
//Convert gl_Position equivalent to vertex position
vec3 worldspaceposition = (inverseprojectionmatrix * glposition).xyz;
//Visualize the result
outColor.rgb = worldspaceposition / 10.0f;
Here is the result with the code above. It’s close, but something is off, and I can’t find a formula for how gl_FragCoord is calculated from gl_Position. Can you tell what is wrong with my code?
You should redivide by w to bring it back to 1. gl_Position gets a perspective divide applied after the vertex shader, so after multiplying by the inverse matrix you have to undo it: take the result’s xyz divided by the result’s w, rather than just the xyz.
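To make that concrete, here is a sketch of the reconstruction with the divide applied, assuming the same `inverseprojectionmatrix`, `depthTexture`, and `BufferSize` as in the code above. (For reference on the missing formula: the pipeline computes NDC as gl_Position.xyz / gl_Position.w, then applies the viewport and depth-range transforms to get gl_FragCoord, so inverting it needs a divide at the end.)

```glsl
// Rebuild the NDC-space position from gl_FragCoord and the depth buffer
vec4 ndc;
ndc.xy = gl_FragCoord.xy / BufferSize * 2.0 - 1.0;
ndc.z  = texelFetch(depthTexture, ivec2(gl_FragCoord.xy), 0).r * 2.0 - 1.0; // default glDepthRange(0, 1)
ndc.w  = 1.0;

// Un-project, then divide xyz by the resulting w to undo the perspective divide
vec4 unprojected = inverseprojectionmatrix * ndc;
vec3 worldspaceposition = unprojected.xyz / unprojected.w;
```

With an orthographic projection the w that comes out of the multiply is already 1, which is why the original code appeared to work in that case.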
Can you walk me through the good image? It does not quite make sense to me. In your code it looks like you route xyz to rgb. But in that case, why is there a large uniform green patch (high y?) in the back? Why is there a cyan block (high y and z)? Why are things close to the camera red (high x)?
The dragon is at position (0, 0, 0) and the camera is rotated 90 degrees to the left. Everything looks correct to me.
Since it’s reading a single depth value, there is no need for linear filtering. I’m guessing texelFetch could be slightly faster, since it skips the sampler’s filtering and addressing; I don’t think it will be slower, at least.
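For comparison, the two fetch styles look like this (a sketch reusing `depthTexture` and `BufferSize` from the code above):

```glsl
// texelFetch: integer texel address, fixed mip level, no filtering or wrap modes applied
float depthA = texelFetch(depthTexture, ivec2(gl_FragCoord.xy), 0).r;

// texture(): normalized coordinates, goes through the sampler's filter/wrap state
float depthB = texture(depthTexture, gl_FragCoord.xy / BufferSize).r;
```

With NEAREST filtering both return the same texel, so texelFetch is the more direct expression of "read exactly this depth sample."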