Converting a depth texture to world space

I need to evaluate the data in my z-buffer texture in world space. Since the depth buffer values have undergone the perspective division, I need to reverse that process.
What I would like to do in a fragment shader is to associate depth values with OpenGL units in world space.

If somebody has done something similar or knows how to do this, I would be very interested.

  1. put your depth buffer in a texture
  2. download the Mesa source and take a look at the gluUnProject function
  3. write a shader similar to the gluUnProject function. The inputs are known (modelview, projection, and viewport passed as uniforms; x and y from gl_FragCoord and z from the depth texture)
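For anyone who doesn't want to dig through the Mesa C source, the math behind gluUnProject is short enough to sketch directly. Here is a pure-Python illustration (no GL; the matrix helpers and names are mine), assuming row-major 4x4 matrices that multiply column vectors:

```python
import math

def mat_vec(m, v):
    """Multiply a 4x4 row-major matrix by a 4-component column vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(4))

def mat_mul(a, b):
    """4x4 matrix product a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_inv(m):
    """4x4 inverse via Gauss-Jordan elimination with partial pivoting."""
    aug = [list(m[i]) + [float(i == j) for j in range(4)] for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(4):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[4:] for row in aug]

def perspective(fovy_deg, aspect, near, far):
    """Projection matrix equivalent to gluPerspective."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[f / aspect, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far),
             2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0]]

def unproject(winx, winy, winz, modelview, projection, viewport):
    """What gluUnProject computes: window coords -> object/world coords."""
    # window coordinates -> normalized device coordinates in [-1, 1]
    ndc = (2.0 * (winx - viewport[0]) / viewport[2] - 1.0,
           2.0 * (winy - viewport[1]) / viewport[3] - 1.0,
           2.0 * winz - 1.0,
           1.0)
    # apply inverse(projection * modelview), then divide by w
    x, y, z, w = mat_vec(mat_inv(mat_mul(projection, modelview)), ndc)
    return (x / w, y / w, z / w)

# Round trip: forward-project a point by hand, then unproject it back.
identity = [[float(i == j) for j in range(4)] for i in range(4)]
proj = perspective(60.0, 1.0, 0.5, 10.5)
viewport = (0, 0, 640, 480)
cx, cy, cz, cw = mat_vec(proj, (1.0, 2.0, -5.0, 1.0))   # clip coords
win = ((cx / cw + 1.0) / 2.0 * viewport[2],             # window x
       (cy / cw + 1.0) / 2.0 * viewport[3],             # window y
       (cz / cw + 1.0) / 2.0)                           # depth in [0, 1]
obj = unproject(win[0], win[1], win[2], identity, proj, viewport)
```

The round trip at the bottom forward-projects a point the way the fixed pipeline would, then unprojects it back; the two should agree to floating-point precision.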


Thanks yooyo, looking into the Mesa source is a great idea.

Please tell me, how can I put the depth buffer in a texture? :confused:

Read the thread “depth buffer as float texture” in the OpenGL advanced forum.
If you don’t need more than 8 bits of precision, it’s fairly straightforward.

Would eye space be good enough? Here’s how you can take a depth buffer value to eye space (ARB fp; not GLSL, I’m afraid).

first pass this to program local 0:
far_clip - near_clip, -far_clip, far_clip * near_clip, 0.0f

then do this in the frag program:
PARAM depth_to_eyez = program.local[0];
TEMP depthtex, eyez;

get depth texture value, which is NDC z remapped to [0, 1]

TEX depthtex.x, fragment.position, texture[1], RECT;

convert depthtex value to eyez;

MAD eyez.x, depthtex.x, depth_to_eyez.x, depth_to_eyez.y;
RCP eyez.x, eyez.x;
MUL eyez.x, eyez.x, depth_to_eyez.z;
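To make it easier to see what those three instructions compute, here they are transcribed into Python, together with the standard OpenGL perspective depth mapping they invert (the Python names are mine; near=0.5 and far=10.5 match the values used elsewhere in this thread):

```python
def depth_from_eyez(ze, near, far):
    """Standard perspective mapping: eye-space z (negative, between -near
    and -far) to the [0, 1] value stored in the depth buffer."""
    ndc_z = (far + near) / (far - near) + (2.0 * far * near) / ((far - near) * ze)
    return (ndc_z + 1.0) / 2.0

def eyez_from_depth(d, near, far):
    """Transcription of the fragment program above, with the constants
    (far - near, -far, far * near) from program.local[0]."""
    t = d * (far - near) - far     # MAD eyez.x, depthtex.x, c.x, c.y
    return (far * near) / t        # RCP eyez.x ; MUL by c.z
```

Algebraically the MAD leaves d*(far - near) - far = far*near/ze, so the reciprocal and multiply recover ze exactly, sign and all; that is why the result comes back negative.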

I don’t get it.
Eye space still contains the perspective calculation, right!?
So your code should only change the range of the depth data, not linearize it…
I get negative values when using this code with clipping at near=0.5 and far=10.5.

I am not sure if this can help me, but it sure looks more efficient than the approach I am taking at the moment…

Can you explain what the code should do?

I believe the calculation that makes your depth values non-linear is in the projection matrix, so anything transformed back to eye space should still be linear, just like the values you get from this method. The negative values you see are correct because in OpenGL the +z axis points toward you, so everything in front of the camera has a negative eye-space z.

I derived this method after reading a bunch of literature on the web and solving some equations. I was amazed I could still do that much math :slight_smile: Unfortunately, I don’t know where my sources are anymore. Here’s one source: But there are a lot more out there if you google for them.

Basically, you need to find the equation that converts linear eye z to normalized device coordinate z, then invert the equation, and then turn it into shader code.

The result should be pretty accurate up close to you and very inaccurate close to the far clip plane. Once you convert depth to NDC, a lot of precision is lost forever.
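That precision falloff is easy to demonstrate numerically. A small sketch, assuming a 16-bit depth buffer purely for illustration and the near=0.5/far=10.5 planes used in this thread (helper names are mine):

```python
def depth_from_eyez(ze, near, far):
    """Eye-space z (negative) -> perspective depth in [0, 1]."""
    ndc_z = (far + near) / (far - near) + (2.0 * far * near) / ((far - near) * ze)
    return (ndc_z + 1.0) / 2.0

def eyez_from_depth(d, near, far):
    """Inverse mapping: depth back to eye-space z."""
    return (far * near) / (d * (far - near) - far)

def roundtrip_error(ze, near, far, bits=16):
    """Quantize the depth value to a fixed-point depth buffer, reconstruct
    eye-space z, and report the absolute reconstruction error."""
    levels = (1 << bits) - 1
    d = round(depth_from_eyez(ze, near, far) * levels) / levels
    return abs(eyez_from_depth(d, near, far) - ze)

err_near = roundtrip_error(-1.0, 0.5, 10.5)    # close to the eye
err_far = roundtrip_error(-10.0, 0.5, 10.5)    # close to the far plane
```

With these numbers the reconstruction error near the far plane comes out roughly two orders of magnitude larger than near the eye, which matches the observation above.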

Yes, it looks like I will need to use the projection matrix to get back to world space.
Then the gluUnProject code would be the only way to go. And I thought I would get around matrix calculations… :frowning:

… or you can create an rgba32f pbuffer and store each pixel’s xyz position in rgb. In the second pass, bind the rgba32f pbuffer as a texture and read rgb as xyz.


:smiley: I got it! :smiley:
mogumbo had the right idea: near*far / (far - depth*(far - near))
unmaps the depth values to linear clipping-range data; that’s what I was looking for. I now have z data which is linearly related to the pixel-to-eye distance.
I had the unproject method working, when I realized that world space wasn’t what I needed…
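Written out as code, that mapping can be sanity-checked at the clip planes. A quick Python check with the near=0.5/far=10.5 planes from earlier in the thread:

```python
def linear_depth(d, near, far):
    """near*far / (far - d*(far - near)): depth-buffer value d in [0, 1]
    back to linear eye distance in [near, far]."""
    return (near * far) / (far - d * (far - near))

near, far = 0.5, 10.5
at_near = linear_depth(0.0, near, far)   # a depth of 0 sits on the near plane
at_far = linear_depth(1.0, near, far)    # a depth of 1 sits on the far plane
```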

Thanks to everybody for your help!
