# Compute eye space coord from window space coord?

Is there a way to compute the eye space coordinate from a clip space coordinate and depth value in GLSL (gluUnproject in GLSL, so to speak)? How please?

> Is there a way to compute the eye space coordinate from a clip space coordinate and depth value in GLSL (gluUnproject in GLSL, so to speak)? How please?

Do you really mean clip-space and not window-space? Because the transform from eye-space to clip-space is just a matrix (the perspective matrix). Therefore, the transform back would be a transformation by the inverse of that matrix.

Window-space is more complex and requires that you provide the shader with the viewport transform.

Yeah, window (screen) space. What I want is to compute the eye space coordinate of a pixel in the frame buffer from its position and that pixel’s depth value.

Thx. Is there a viewport transformation matrix? Where do I retrieve it?

Oh, sorry. You did ask about the full eye-space position of the pixel, not just its Z coordinate. Here:

See the routine at the bottom of that post. There are all sorts of ways to skin this cat.

…and on that note, here are a few related posts you might find interesting that describe exactly that:

> Thx. Is there a viewport transformation matrix? Where do I retrieve it?

That routine I pointed you to presumes glViewport( 0, 0, width, height) – where widthInv = 1/width and heightInv = 1/height.
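In other words, the viewport transform there is just a scale and bias. A quick Python sketch (illustrative, not shader code) of the window-to-NDC mapping that assumption implies, assuming glViewport(0, 0, width, height) and the default glDepthRange(0, 1):

```python
def window_to_ndc(win_x, win_y, win_z, width, height):
    """Map window-space coordinates to normalized device coordinates,
    assuming glViewport(0, 0, width, height) and glDepthRange(0, 1)."""
    width_inv = 1.0 / width    # the widthInv value from the routine
    height_inv = 1.0 / height  # the heightInv value from the routine
    ndc_x = win_x * width_inv * 2.0 - 1.0
    ndc_y = win_y * height_inv * 2.0 - 1.0
    ndc_z = win_z * 2.0 - 1.0  # depth texture value (0..1) -> (-1..1)
    return ndc_x, ndc_y, ndc_z
```

The lower-left corner maps to (-1, -1) and the viewport center to (0, 0), which is the convention the rest of the thread relies on.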

Ok, thank you, understood all that after a while of pondering the code.

One thing that doesn’t work well for me is your EyeZ formula. It works better for me this way:

``````
#define EyeZ(_z) (zFar / (zFar - zNear)) / ((zFar / zNear) - (_z))
``````

@DarkPhoton,

I’ve been wanting for some time to remove the 32-bit eye space XYZ position vector from my deferred renderer’s G-Buffer and replace it with maths to reconstruct Zeye from the depth texture instead. However, I have never found a post containing everything I need to do this, and when I have attempted it the results were wrong.

What I’d like to do is convert from depth texture Z to NDC Z (along with constructing NDC X and Y), then convert from NDC to EYE space.

I note from the reference you gave here

that you have calculated Zeye from the depth texture and projection matrix. However, I ran through your algebra, and whilst I’m no wizard at it, I did spot that your Zeye comes out wrong (at the point where you converted from -Zndc to Zndc).
You end up with

``````
float z_eye = gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);
``````

…but I ended up with

``````
float z_eye = -gl_ProjectionMatrix[3].z/ ( (z_viewport * -2.0) + 1.0 - gl_ProjectionMatrix[2].z);
``````

my working out of each term, step by step:

``````
z_ndc = z_clip / w_clip
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z + gl_ProjectionMatrix[3].z ] / -z_eye
z_ndc = [ z_eye*gl_ProjectionMatrix[2].z] / -z_eye     + gl_ProjectionMatrix[3].z / -z_eye;	//separating out the terms
z_ndc = -gl_ProjectionMatrix[2].z      +    gl_ProjectionMatrix[3].z / -z_eye;			//cancelling out z_eye
z_ndc + gl_ProjectionMatrix[2].z       =    gl_ProjectionMatrix[3].z / -z_eye;			//re arranging
(z_ndc + gl_ProjectionMatrix[2].z) * -z_eye    =    gl_ProjectionMatrix[3].z;			//re arranging z_eye
-z_eye =    gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z)			//re arranging z_eye to LHS
z_eye =  -1 * [  gl_ProjectionMatrix[3].z / (z_ndc + gl_ProjectionMatrix[2].z) ]		//removing -ve term from z_eye
z_eye =  -gl_ProjectionMatrix[3].z / (-z_ndc - gl_ProjectionMatrix[2].z) 			//removing -ve term from z_eye
float z_eye = -gl_ProjectionMatrix[3].z/((-z_viewport * 2.0) + 1.0 - gl_ProjectionMatrix[2].z);	//substitute z_ndc = z_viewport * 2.0 - 1.0

``````

So it seems quite easy to obtain NDC space position:

``````
ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;
z_ndc = (z_viewport * 2.0) - 1.0;		//z_viewport is the depth texture sample value (0..1) range

``````

and the conversion to EYE space

``````
z_eye = -gl_ProjectionMatrix[3].z/(z_viewport * -2.0 + 1.0 - gl_ProjectionMatrix[2].z);

``````

but the X and Y EYE space conversions trouble me because I can’t figure out what RIGHT and TOP are. (I assume near is near clip value, typically 0.5 for example when used with gluPerspective)

``````
eye.x = (-ndc.x * eye.z) * right/near;
eye.y = (-ndc.y * eye.z) * top/near;

``````

Also, is there a way to remove right/near and top/near and use values picked from the projection matrix instead? I’d rather not have to supply a uniform to pass in those two values, and it seems a shame to do so when everything else can be calculated from the depth texture, projection matrix and viewport dimensions.

I am a step further, but shadow maps still do not work quite right for me.

gl_TextureMatrix [2] contains light projection * light modelview * inverse (camera modelview). With that and the following shader code:

``````
uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define ZRANGE (ZFAR - ZNEAR)
#define EyeZ(screenZ) (ZFAR / ((screenZ) * ZRANGE - ZFAR))

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 ndc;
    ndc.z = EyeZ (colorDepth);
    ndc.xy = (gl_TexCoord [0].xy - vec2 (0.5, 0.5)) * 2.0 * -ndc.z;
    ndc.w = 1.0;
    vec4 ls = gl_TextureMatrix [2] * ndc;
    // shadowDepth (sampled from the shadow map) is not declared in this snippet
    float light = 0.25 + ((colorDepth < shadowDepth + 0.0005) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
``````

The shadow map projection doesn’t work right. Depending on camera orientation, the shadow moves around a bit. When the camera moves close to the floor, the shadow depth values get larger until the shadow disappears. What also happens is that the shadow is projected on faces behind the light (i.e. in the reverse direction). What’s the reason for all that?

What I had also expected was that I would have to apply the inverse of the camera’s projection matrix to ndc (ndc are projected coordinates, right? If so, I thought I’d have to unproject, untranslate and unrotate from the camera view, then rotate, translate and project in the light view to access the proper shadow map value). When I however unproject ndc with the inverse camera projection, shadow mapping doesn’t work at all anymore.

Images:

Camera pointing forward (btw, where’s that shadow artifact coming from?):

Camera pointed up a bit (same position) -> shadow looks different:

They’re the args of glFrustum you’d otherwise pass to define your view frustum.

> (I assume near is near clip value, typically 0.5 for example when used with gluPerspective)

Right, also the args of glFrustum. Specifically, they are the negatives of eye-space Z.

> Also, is there a way to remove right/near and top/near and use a value picked from the Projection matrix instead?

Probably. Give it a go.

Don’t forget that you can just interpolate a view vector across your surface and use that to reconstruct the position too.
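For what it’s worth, one way to “give it a go” (a sketch, not from the thread, assuming a symmetric glFrustum/gluPerspective matrix): P[0].x is near/right and P[1].y is near/top, so both ratios fall straight out of the projection matrix. Illustrated in Python:

```python
def eye_xy_from_ndc(ndc_x, ndc_y, eye_z, p00, p11):
    """Reconstruct eye-space X/Y from NDC X/Y and eye-space Z.
    Assumes a symmetric frustum, where
      p00 = gl_ProjectionMatrix[0].x = near/right
      p11 = gl_ProjectionMatrix[1].y = near/top
    so right/near = 1.0/p00 and top/near = 1.0/p11."""
    eye_x = -ndc_x * eye_z / p00   # == (-ndc.x * eye.z) * right/near
    eye_y = -ndc_y * eye_z / p11
    return eye_x, eye_y
```

As a sanity check: with near = 1 and right = 0.5, p00 = 2.0, and a fragment at ndc_x = 1 on the near plane (eye_z = -1) reconstructs to eye_x = 0.5 = right.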

No clues for me? :-/

Plug some numbers and you’ll see that yours isn’t correct. For instance, near=1, far=9, z_viewport = 0. You should get z_eye = -near = -1. With yours you don’t. You get +1 instead.
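That spot check is easy to script. A Python sketch plugging near = 1, far = 9 into the standard glFrustum depth coefficients and evaluating both formulas from earlier in the thread:

```python
def z_eye_original(z_viewport, p22, p32):
    # Dark Photon's form: p32 / (z_viewport * -2 + 1 - p22)
    return p32 / (z_viewport * -2.0 + 1.0 - p22)

def z_eye_negated(z_viewport, p22, p32):
    # the negated variant proposed above
    return -p32 / (z_viewport * -2.0 + 1.0 - p22)

near, far = 1.0, 9.0
p22 = -(far + near) / (far - near)      # gl_ProjectionMatrix[2].z
p32 = -2.0 * far * near / (far - near)  # gl_ProjectionMatrix[3].z

# z_viewport = 0 lies on the near plane, so eye-space Z must be -near.
print(z_eye_original(0.0, p22, p32))  # prints -1.0
print(z_eye_negated(0.0, p22, p32))   # prints 1.0 (wrong sign)
```

The same check at z_viewport = 1 gives -9.0 = -far for the original form, confirming it.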

If you look at the latest shader code I have posted, you will see that I am using your formula. (Btw, I had also noted the sign error with my formula, and fixed that).

I also tried the formula you are quoting here, but it doesn’t change a thing:

``````
uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform mat4 projection;
uniform vec2 screenScale; // 1.0 / window width, 1.0 / window height

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 eye;
    eye.z = -projection [3].z / (colorDepth * -2.0 + 1.0 - projection [2].z);
    eye.xy = (gl_FragCoord.xy * screenScale - vec2 (0.5, 0.5)) * -2.0 * eye.z;
    eye.w = 1.0;
    vec4 ls = gl_TextureMatrix [2] * eye;
    float light = 0.25 + ((colorDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
``````

I have the impression that it has something to do with the projection matrix stuff. The projection matrix isn’t viewer dependent, and I am always using one and the same projection (it gets even set every frame). I just can’t seem to determine what is causing my problems, so I am asking for clues here.

One thing I am not sure about is what the projection does. After the eye coordinate has been constructed, is it identical to what the modelview transformation + projection would produce, or to what only the modelview transformation would produce? In other words: Do I need to unproject the reconstructed eye coordinate (by multiplying with the inverse of the projection matrix), or not? I think I need to, but I have the suspicion that the eye coordinate reconstruction contains a few steps of unprojecting it.

I had also been looking at an explanation of the various transformation steps OpenGL does from transforming over projecting to computing screen coordinates for a vertex, and tried to reverse that, but either I have misunderstood something, or the document I have read contains errors.

Based purely on reading your question (not your code), it is the latter (eye coordinates have only the modelview matrix applied).

See “Figure 3-2: Stages of Vertex Transformation” in this online version of an old “Red Book” for the sequence of transformations in the fixed function pipeline. The shaders are more flexible, but the basic concepts still apply.
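The stages in that figure amount to a short chain of transforms. A Python sketch of the fixed-function sequence (illustrative only, assuming the default viewport origin and glDepthRange(0, 1)):

```python
def transform_vertex(obj, modelview, projection, width, height):
    """Object coords -> eye -> clip -> NDC -> window, as in the Red Book
    figure. `obj` is a 4-component tuple; matrices are row-major 4x4 lists."""
    def mul(m, v):
        return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

    eye = mul(modelview, obj)          # modelview transform
    clip = mul(projection, eye)        # projection transform
    ndc = tuple(c / clip[3] for c in clip[:3])  # perspective divide
    win = ((ndc[0] * 0.5 + 0.5) * width,        # viewport transform
           (ndc[1] * 0.5 + 0.5) * height,
           ndc[2] * 0.5 + 0.5)                  # default glDepthRange(0, 1)
    return eye, clip, ndc, win
```

Reversing any one stage means undoing only the steps after it; eye space is what you have after the modelview transform but before the projection.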

I actually didn’t put my question quite right. It should have been “One thing I am not sure about is what the reconstruction does.” (not “projection”). Since I was referring to the reconstruction (not your fault for missing it, since I didn’t put it right), the code might need to be examined. Btw, do I need to divide ls by ls.w after having computed it (gl_TextureMatrix [2] * eye)?

Background is that I tried to do the final steps of computing window coordinates after applying modelview and projection myself to access the shadow map and see what happens:

So what I did was:

• reconstruct eye from window coord
• apply inverse camera modelview
• apply light modelview
• apply light projection
• compute window xy [xy = xy / (-2.0 * z) + vec2 (0.5, 0.5)] (basically reversing the eye construction step from above)
• access shadow map with that xy.

So the question aimed at whether applying the projection already computes window xy (from all I know and have read, it does not).

Those manually computed window coords didn’t work though. Using texture2DProj doesn’t work quite right, too.

I checked that I computed the matrix inverse properly to make sure the problem wasn’t rooted there.

Ok, with some help from a friendly guy on StackOverflow.com I finally got this to work (in principle). Here’s the fragment shader code for projecting a shadow map into a frame buffer as a post process:

``````
uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;

void main()
{
    float colorDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    vec4 screenPos = (vec4 (gl_TexCoord [0].xy, colorDepth, 1.0) - vec4 (0.5, 0.5, 0.5, 0.5)) * 2.0;
    vec4 eyePos = projectionInverse * screenPos;
    eyePos /= eyePos.w;
    vec4 lightPos = gl_TextureMatrix [2] * eyePos;
    float light = 0.25 + ((colorDepth < shadowDepth + 0.0005) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
``````

A few problems remain, like the shadow disappearing depending on viewer position, and some artifacts.

What does texture2DProj return if the projected coordinate is outside the window space? Do I need to take care of that myself?

What would be the shader code to fully emulate/replace texture2DProj (by texture2D with the properly computed window space coordinates)?
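For what it’s worth, texture2DProj(sampler, p) with a vec4 coordinate is defined as texture2D(sampler, p.xy / p.w); it does no bounds handling of its own, so coordinates landing outside [0, 1] simply go through the texture’s wrap mode (e.g. GL_CLAMP), and any guarding is up to you. A Python sketch of that divide plus an explicit guard (illustrative, not shader code):

```python
def projective_texcoord(px, py, pw):
    """Emulate the divide texture2DProj performs: p.xy / p.w.
    Returns None when the result falls outside the [0, 1] texture range
    (texture2DProj itself would just let the wrap mode handle it)."""
    if pw == 0.0:
        return None            # degenerate coordinate
    s, t = px / pw, py / pw
    if not (0.0 <= s <= 1.0 and 0.0 <= t <= 1.0):
        return None            # outside the shadow map: caller decides
    return s, t
```

In a shader the equivalent guard would compare p.xy / p.w against the [0, 1] range before sampling, and treat out-of-range fragments as unshadowed (or however you prefer).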

> A few problems remain, like the shadow disappearing depending on viewer position, and some artifacts.

That’s not surprising since that code doesn’t work. I’ll go step by step:

``````
vec4 screenPos = (vec4 (gl_TexCoord [0].xy, colorDepth, 1.0) -
vec4 (0.5, 0.5, 0.5, 0.5)
) * 2.0;

``````

So, we’re in a fragment shader, and gl_TexCoord[0].xy represents a 0-1 value. The zero value should be the lower-left of the screen, and the 1 value should be the top right. “colorDepth” is the value from the depth buffer. I have no idea why you call it colorDepth, since it has nothing to do with colors, but that’s not the issue.

Given those two things, this equation will compute the normalized device coordinate space position. It should not be called “screenPos”; this value has nothing to do with the screen at this point.

FYI: screen space is the space relative to your monitor. Screen space positions change when you move a window around. Nothing in OpenGL uses screen space, and you’d be hard pressed to actually compute screen space coordinates unless you use window-system-dependent code.

Next:

``````
vec4 eyePos = projectionInverse * screenPos;
``````

And this is where your code loses the plot, so to speak.

This (or this for non-Firefox users) is a representation of the transform from eye-space (there, called “camera space”) to window space. Notice that the first step is the multiplication of the eye-space position by the projection matrix.

This means that if you want to invert this operation, then this must be the last step you do. You must first convert your normalized device coordinate (NDC) position (which is what “screenPos” has) into clip-space before you can multiply it by the inverse projection matrix. And that’s where you have your real problem.

The page I linked you to shows how to use gl_FragCoord to reverse-transform gl_FragCoord into eye-space. But gl_FragCoord has 4 components: X, Y, Z and W. The X, Y and Z are the window-space position. The W is the reciprocal of the clip-space W value. Why?

Because the difference between NDC space and clip-space is that W value. You need that W value in order to transform from NDC space to clip-space.

And your problem is that you don’t have it. You have XYZ in NDC space, but without the original clip-space W value, you can’t do anything.

Now, you could take the easy way out and just store it somewhere. But that’s no fun. The correct answer is to compute it based on your projection matrix and the NDC space Z value.

To do this, you need to unravel your projection matrix. I don’t know what your projection matrix is, but let’s assume it is the standard glFrustum/gluPerspective matrix.

The projection matrix computes the clip-space Z (Zclip) by applying this (or this) equation to it. The clip-space W is just the negation of the eye-space Z (Zeye).

And we know that the NDC-space Z (Zndc) is just the Zclip/-Zeye. Well, we need to find Zeye (which is the clip-space W we need), and we have Zndc. One equation, two unknowns.

However, thanks to the above equation, we can express Zclip entirely in terms of Zeye. So if you substitute the equation for Zclip in there, then we can solve for Zeye. You don’t even really need the equation per se; just pick the values from the (non-inverted) projection matrix. The matrix just stores coefficients. Solve for Zeye, and you’re done.

Once you have Zeye, you know that Wclip is -Zeye. And now that you have Wclip, you can convert the NDC position to clip-space by multiplying it by Wclip. Once there, you can transform the clip-space position through your inverse projection matrix to produce the eye-space position you need.
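Those steps can be checked numerically. A Python sketch (assuming a symmetric gluPerspective-style matrix, so its inverse has a simple closed form) that runs an eye-space point forward to NDC, then reconstructs it exactly as described: solve Zeye from the matrix coefficients, recover Wclip = -Zeye, rescale NDC back to clip space, and apply the inverse projection:

```python
def mul(m, v):
    # row-major 4x4 matrix times 4-component vector
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def perspective(p00, p11, near, far):
    # standard glFrustum/gluPerspective depth coefficients
    p22 = -(far + near) / (far - near)
    p32 = -2.0 * far * near / (far - near)
    return [[p00, 0, 0, 0],
            [0, p11, 0, 0],
            [0, 0, p22, p32],
            [0, 0, -1, 0]], p22, p32

def reconstruct_eye(ndc, p22, p32, inv_proj):
    z_eye = -p32 / (ndc[2] + p22)  # solve z_ndc = (z_eye*p22 + p32) / -z_eye
    w_clip = -z_eye                # clip-space W is the negated eye-space Z
    clip = (ndc[0] * w_clip, ndc[1] * w_clip, ndc[2] * w_clip, w_clip)
    return mul(inv_proj, clip)     # last step: inverse projection

proj, p22, p32 = perspective(2.0, 2.0, 1.0, 100.0)
inv_proj = [[0.5, 0, 0, 0],            # 1/p00
            [0, 0.5, 0, 0],            # 1/p11
            [0, 0, 0, -1],             # eye_z = -clip_w
            [0, 0, 1.0 / p32, p22 / p32]]

eye = (2.0, -1.0, -5.0, 1.0)
clip = mul(proj, eye)
ndc = tuple(c / clip[3] for c in clip[:3])
print(reconstruct_eye(ndc, p22, p32, inv_proj))  # ~ (2.0, -1.0, -5.0, 1.0)
```

The round trip recovers the original eye-space position (up to floating-point noise), which is exactly the chain the shader has to perform.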

The last code I have posted does work (at least to some extent): As long as the shadow is visible, it stays at the right spot. That’s more than I had before. The shadow might disappear because of the W problem you have pointed out, but I am not quite convinced that this is the problem. I admit I am not exactly a 3D math wizard though (or I might not have needed to ask for help here).

The artifacts may be caused by accessing the shadow map with coordinates outside of it (i.e. outside of the light’s screen space). I guess the artifacts are a result of the texture being created with GL_CLAMP. This conclusion is the result of the observation that they only appear when the shadow reaches the confines of the light’s frustum.

I called it “colorDepth” because it’s the depth value corresponding to the frame’s color buffer, but thank you for taking your time to point out to me that this was a stupid idea.

Thank you also for explaining to me how idiotically misleading the term “screen space” is when I should have used “window space” (btw, doesn’t this thread’s title clearly say that?).

I can’t say how glad I am that the GLSL compiler understood my intentions despite all these blatant faults of mine.

So this is the relevant code?

``````
vec3 CalcCameraSpacePosition()
{
vec3 ndcPos;
ndcPos.xy = ((gl_FragCoord.xy / windowSize.xy) * 2.0) - 1.0;
ndcPos.z = (2.0 * gl_FragCoord.z - depthRange.x - depthRange.y) / (depthRange.y - depthRange.x);

vec4 clipPos;
clipPos.w = 1.0f / gl_FragCoord.w;
clipPos.xyz = ndcPos.xyz * clipPos.w;

return vec3(clipToCameraMatrix * clipPos);
}

``````

Since I am simply rendering a fullscreen quad, my “gl_TexCoord [0]” should be the same as your “gl_FragCoord.xy / windowSize.xy”.

I don’t have gl_FragCoord.z or gl_FragCoord.w. gl_FragCoord.z would be my “windowZ” (from the depth buffer)?

I can solve the equation, but wouldn’t know how to do this, since I don’t know which values I’d actually have to pick:

And isn’t v.xyzw == vec4 (v.xyz / v.w, 1.0) for all purposes of transformation and projection?

Unless I completely screwed up, here’s the equation’s solution:

``````
A = ZNear + ZFar
B = ZNear - ZFar
C = 2 * ZNear * ZFar

clip_w = -eye_z
clip_z = (eye_z * A) / B + C / B = (eye_z * A + C) / B

ndc_z = clip_z / clip_w

=> ndc_z = ((eye_z * A + C) / B) / -eye_z
=> -eye_z * ndc_z = (eye_z * A + C) / B
=> -eye_z * ndc_z * B = eye_z * A + C

D = ndc_z * B

=> -eye_z * D = eye_z * A + C
=> 0 = eye_z * A + eye_z * D + C
=> 0 = eye_z * (A + D) + C
=> -C = eye_z * (A + D)
=> eye_z = -C / (A + D)
=> eye_z = -2 * ZNear * ZFar / (ZNear + ZFar + ndc_z * (ZNear - ZFar))
``````
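The result checks out numerically: at the near plane (ndc_z = -1) the formula must return -ZNear, and at the far plane (ndc_z = +1) it must return -ZFar. A quick Python sketch:

```python
def eye_z(ndc_z, z_near, z_far):
    # eye_z = -C / (A + D), per the derivation above
    a = z_near + z_far
    b = z_near - z_far
    c = 2.0 * z_near * z_far
    d = ndc_z * b
    return -c / (a + d)

print(eye_z(-1.0, 1.0, 5000.0))  # prints -1.0    (near plane)
print(eye_z(1.0, 1.0, 5000.0))   # prints -5000.0 (far plane)
```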

Here’s the shader (I hope the variable names are to your taste):

``````
uniform sampler2D sceneColor;
uniform sampler2D sceneDepth;
uniform mat4 projectionInverse;

#define ZNEAR 1.0
#define ZFAR 5000.0
#define A (ZNEAR + ZFAR)
#define B (ZNEAR - ZFAR)
#define C (2.0 * ZNEAR * ZFAR)
#define D (ndcPos.z * B)
#define ZEYE (-C / (A + D))

void main()
{
    float fragDepth = texture2D (sceneDepth, gl_TexCoord [0].xy).r;
    // window-space depth (0..1 with the default glDepthRange) -> NDC z (-1..1)
    vec3 ndcPos = vec3 (2.0 * gl_TexCoord [0].xy - 1.0, 2.0 * fragDepth - 1.0);
    vec4 clipPos;
    clipPos.w = -ZEYE;
    clipPos.xyz = ndcPos * clipPos.w;
    vec4 eyePos = projectionInverse * clipPos;
    vec4 lightPos = gl_TextureMatrix [2] * eyePos;
    float light = 0.25 + ((fragDepth < shadowDepth) ? 0.75 : 0.0);
    gl_FragColor = vec4 (texture2D (sceneColor, gl_TexCoord [0].xy).rgb * light, 1.0);
}
``````

> So this is the relevant code?

I’m not sure how you got that code; I said that you had computed the NDC space position correctly (even though the variable that held it was misnamed).

It’s the clip-space computation (the part where you did the division) that you got wrong.

> I can solve the equation, but wouldn’t know how to do this, since I don’t know which values I’d actually have to pick:

It’s vector/matrix multiplication; it’s just shorthand for a linear system of equations. Do the multiplication by hand and see which values affect the Zclip output. Then pick those values out of the projection matrix.

> And isn’t v.xyzw == vec4 (v.xyz / v.w, 1.0) for all purposes of transformation and projection?

What’s “v”? If “v” is the clip-space position, you’re correct. The problem is that you don’t have the clip-space position yet.

And the inverse of that operation is

v.xyzw = vec4(v.xyz * v.w, v.w);

You need v.w to perform the inverse operation. And you will note that, in the operation as you stated it, v.w is lost.
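A two-line sketch of why: the perspective divide throws w away, so without storing it there is nothing to multiply back. In Python:

```python
def clip_to_ndc(clip):
    x, y, z, w = clip
    return (x / w, y / w, z / w)   # w is discarded here

def ndc_to_clip(ndc, w):
    # impossible without w being supplied (or recomputed) separately
    return (ndc[0] * w, ndc[1] * w, ndc[2] * w, w)

clip = (4.0, -2.0, 6.0, 2.0)
ndc = clip_to_ndc(clip)
assert ndc_to_clip(ndc, 2.0) == clip   # round-trips only because we kept w
```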

> I hope the variable names are to your taste

It’s not about being “to [my] taste”; it’s about being accurate and self-documenting. It’s more for your convenience than anything. I can’t count the number of times that correct variable names have helped me figure out what some 6-month-old code was doing, or that bad variable names have obfuscated the intent of code.

I got that code from one of the pages you had linked to.

What I wanted to say is that your comments about the variable names I have chosen were completely inappropriate and uncalled for. I understood the names well, and from the very simple code they were used in it was quite clear what they were.

You aren’t seriously proposing that I am gonna start to manually do matrix computations to find out which coefficients to use, do you?

Please stop playing the teacher here, because that is how you are coming across.