Question about reconstructing World Position from depth buffer


I’m trying to reconstruct the world position of my geometry in my deferred shader like this:

  vec2 texcoord = vec2(gl_FragCoord.x / screenres_x, gl_FragCoord.y / screenres_y);
  float d = texture(uniform_depth, texcoord).r;
  // matrix * vector, not vector * matrix (the latter multiplies by the transpose)
  vec4 pos = inverse_projectionmatrix * vec4(texcoord.xy * 2 - 1, d * 2 - 1, 1);
  pos /= pos.w; // perspective divide: NDC through the inverse projection lands in view space

Now this seems to give me the position in view space, right? But how do I transform this position back into the geometry’s original world space?

Any Ideas?

Best regards

I’ll take your word for it that that gives you EYE-SPACE (sometimes called view space or camera space); I haven’t checked your math. The PositonFromDepth_DarkPhoton() function listed in this post definitely does (for a perspective projection).

From there, getting back to WORLD-SPACE is simple. Since the VIEWING transform is the WORLD-to-EYE transform, invert it to get the EYE-to-WORLD transform.
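Not from the thread, but here’s a numerical sketch of that whole chain, with pure-Python 4x4 math standing in for GLSL’s mat4 and made-up matrix values: project a world-space point forward, then undo the projection (back to EYE space) and the inverted VIEWING transform (back to WORLD space).

```python
import math

def mat_vec(m, v):
    # multiply a 4x4 matrix (list of rows) by a 4-vector
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def inverse(m):
    # Gauss-Jordan inverse of a 4x4 matrix
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)]
         for i, row in enumerate(m)]
    for i in range(4):
        p = max(range(i, 4), key=lambda r: abs(a[r][i]))
        a[i], a[p] = a[p], a[i]
        piv = a[i][i]
        a[i] = [x / piv for x in a[i]]
        for r in range(4):
            if r != i:
                fac = a[r][i]
                a[r] = [x - fac * y for x, y in zip(a[r], a[i])]
    return [row[4:] for row in a]

def perspective(fovy, aspect, n, f):
    # standard OpenGL perspective projection (gluPerspective-style)
    t = 1.0 / math.tan(fovy / 2)
    return [[t / aspect, 0, 0, 0],
            [0, t, 0, 0],
            [0, 0, -(f + n) / (f - n), -2 * f * n / (f - n)],
            [0, 0, -1, 0]]

# toy VIEWING (world-to-eye) matrix: just a translation for this demo
view = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, -2], [0, 0, 0, 1]]
proj = perspective(math.radians(60), 16 / 9, 0.1, 100.0)

# forward path: world -> eye -> clip -> NDC, plus the stored depth value
world = [1.0, 2.0, -3.0, 1.0]
eye = mat_vec(view, world)
clip = mat_vec(proj, eye)
ndc = [c / clip[3] for c in clip]
d = ndc[2] * 0.5 + 0.5            # what ends up in the depth buffer

# reconstruction: NDC -> inverse projection -> divide by w gives EYE space...
p = mat_vec(inverse(proj), [ndc[0], ndc[1], d * 2 - 1, 1.0])
eye_rec = [c / p[3] for c in p]
# ...and the inverted VIEWING transform takes EYE space back to WORLD space
world_rec = mat_vec(inverse(view), eye_rec)
print(world_rec)  # matches `world` up to float rounding
```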

I took a closer look at your function PositonFromDepth_DarkPhoton(). It seems that you do not apply a transposed inverse projection matrix. Isn’t this necessary? Instead you’re using right/left/bottom/top variables. What are they?

I think you mean inverse projection matrix. Inverse transpose would be for transforming normals.
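To make that distinction concrete, a toy check (made-up shear matrix, pure Python): transforming a normal by the matrix itself breaks perpendicularity under a non-uniform transform, while the inverse transpose preserves it.

```python
M      = [[2, 0, 0], [0, 1, 0], [1, 0, 1]]        # non-uniform scale + shear
M_inv  = [[0.5, 0, 0], [0, 1, 0], [-0.5, 0, 1]]   # inverse of M (worked out by hand)
M_invT = [list(col) for col in zip(*M_inv)]       # (M^-1)^T, for normals

def mv(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

n = [1, 0, -2]   # surface normal
t = [2, 0, 1]    # tangent lying in the surface: dot(n, t) == 0

t2 = mv(M, t)                  # tangents transform by M itself
print(dot(t2, mv(M, n)))       # wrong: perpendicularity lost -> 5
print(dot(t2, mv(M_invT, n)))  # right: inverse transpose keeps it -> 0.0
```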

No, it’s not necessary. Think about the classic application of matrix math in linear algebra: what do you use an inverse matrix for? Solving a system of equations. If you already know the form of the equations you’re solving (here, the projection), you can solve them analytically for the solution in terms of the projection’s input parameters, without ever computing an inverse matrix. Which brings us to…

For the meaning of left/right/bottom/top, see the glFrustum() man page. Yes, this solution is specifically formulated for a perspective projection.
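Here’s a sketch of what that analytic route looks like (in Python, with made-up frustum values): the glFrustum() mapping is written out as plain equations and then solved backwards for the eye-space position, so no inverse matrix ever appears. This is the idea behind PositonFromDepth_DarkPhoton(), not its literal code.

```python
# made-up glFrustum() parameters: near-plane extents and clip distances
l, r, b, t, n, f = -0.16, 0.16, -0.09, 0.09, 0.1, 100.0

def project(xe, ye, ze):
    # forward glFrustum mapping: eye space -> NDC (ze < 0 in front of the camera)
    xn = (2 * n * xe + (r + l) * ze) / ((r - l) * -ze)
    yn = (2 * n * ye + (t + b) * ze) / ((t - b) * -ze)
    zn = (-(f + n) * ze - 2 * f * n) / ((f - n) * -ze)
    return xn, yn, zn

def eye_from_ndc(xn, yn, zn):
    # the same equations solved for (xe, ye, ze): no matrix inverse needed
    ze = 2 * f * n / ((f - n) * zn - (f + n))
    xe = -ze * ((r - l) * xn + (r + l)) / (2 * n)
    ye = -ze * ((t - b) * yn + (t + b)) / (2 * n)
    return xe, ye, ze

eye = (0.3, -0.2, -5.0)
rec = eye_from_ndc(*project(*eye))
print(rec)  # matches `eye` up to float rounding
```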

Oh yeah, a quick look at the actual glFrustum() implementation gave me a clue. Thanks m8