# What is stored in the depth texture?

I am trying to understand the process of reconstructing the view-space vertex position (after the camera transform) from the depth texture, for use in deferred shading.

I have actually implemented the process using a method that involves interpolating the view vector, but I don't fully understand it, and it doesn't work with the bounding-box light pass (maybe it actually works and my implementation is wrong).

The simplest way, described in many articles, is to just unproject the NDC vertex (x/w, y/w, z/w, 1.0) with the inverse projection matrix.

x/w and y/w can be retrieved easily by interpolating gl_Position.xy / gl_Position.w with the `noperspective` qualifier; I already use this to calculate the screen-space texture coordinate (please correct me if I am wrong).
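As a quick sanity check of that mapping (my own sketch, not code from this thread): NDC x/w and y/w lie in [-1, 1], and the screen-space texture coordinate is just the remap to [0, 1].

```python
# Map NDC coordinates (x/w, y/w in [-1, 1]) to screen-space
# texture coordinates in [0, 1]; the same remap the depth
# buffer applies to z/w under the default depth range.
def ndc_to_texcoord(ndc_x, ndc_y):
    return (ndc_x * 0.5 + 0.5, ndc_y * 0.5 + 0.5)

print(ndc_to_texcoord(-1.0, -1.0))  # bottom-left corner  -> (0.0, 0.0)
print(ndc_to_texcoord(1.0, 1.0))    # top-right corner    -> (1.0, 1.0)
print(ndc_to_texcoord(0.0, 0.0))    # screen center       -> (0.5, 0.5)
```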

But how can I calculate z/w? Is it already stored in the depth texture, or do I have to calculate it somehow?

The depth texture stores (z/w + 1) / 2 (with the default depth range). To convert that value back to z, see:
http://www.geeks3d.com/20091216/geexlab-how-to-visualize-the-depth-buffer-in-glsl/
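A small numeric sketch of that conversion (my own example, not taken from the linked article), assuming the default glDepthRange(0, 1) and a standard OpenGL perspective matrix with near plane n and far plane f:

```python
def depth_to_ndc_z(d):
    # The depth texture stores d = (z_ndc + 1) / 2, so invert the remap.
    return d * 2.0 - 1.0

def ndc_z_to_eye_z(z_ndc, n, f):
    # Invert the perspective depth mapping
    #   z_ndc = (f + n) / (f - n) + 2 f n / ((f - n) * z_eye)
    # (OpenGL convention: eye-space z is negative in front of the camera).
    return 2.0 * f * n / ((f - n) * z_ndc - (f + n))

n, f = 0.1, 100.0
print(ndc_z_to_eye_z(depth_to_ndc_z(0.0), n, f))  # ~ -0.1   (near plane)
print(ndc_z_to_eye_z(depth_to_ndc_z(1.0), n, f))  # ~ -100.0 (far plane)
```

Note the mapping is non-linear: most of the depth precision sits near the near plane.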

(P.S. I'm not really sure about the maths yet … I'm still doing the matrix-inverse multiplication that you mention.)

Thank you! I can now successfully re-calculate the view-space vertex position using the above method. (Sorry, I just noticed that you had already posted the depth-to-view-position code in your old post.)
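For reference, the full round trip can be verified on the CPU. This is my own sketch with a hypothetical perspective setup (not the code from the old post): project a view-space point, keep only what the G-buffer pass would have (screen NDC x, y and the stored depth value), then unproject with the inverse projection matrix and divide by w again.

```python
import math

def perspective(fovy_deg, aspect, n, f):
    # Standard OpenGL perspective projection matrix (list of rows).
    t = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[t / aspect, 0.0, 0.0, 0.0],
            [0.0, t, 0.0, 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def inverse(m):
    # General 4x4 inverse via Gauss-Jordan elimination with partial pivoting.
    a = [row[:] + [float(i == j) for j in range(4)] for i, row in enumerate(m)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(4):
            if r != col:
                factor = a[r][col]
                a[r] = [x - factor * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)

# 1. Forward pass: project a view-space point to clip space, then NDC.
view_pos = [1.5, -0.75, -10.0, 1.0]      # eye space, in front of the camera
clip = mat_vec(proj, view_pos)
ndc = [c / clip[3] for c in clip]        # (x/w, y/w, z/w, 1)
depth = ndc[2] * 0.5 + 0.5               # what the depth texture stores

# 2. Deferred pass: rebuild NDC from screen xy + depth, then unproject.
ndc_rebuilt = [ndc[0], ndc[1], depth * 2.0 - 1.0, 1.0]
v = mat_vec(inverse(proj), ndc_rebuilt)
recovered = [c / v[3] for c in v[:3]]    # the second divide-by-w is essential
print(recovered)                         # ~ [1.5, -0.75, -10.0]
```

In the real fragment shader the same unprojection happens per pixel, which is the per-pixel matrix multiplication mentioned below.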

The unproject method is a hundred times easier to understand than the old method I used (the view-vector interpolation).

The performance is almost the same too; the unproject method is a little slower (93 fps vs 89 fps), maybe due to the matrix multiplication at every pixel.