# Get pixel position for lighting and SSAO from depth?

Hello, I'm trying to reconstruct the pixel position from depth, but I don't get the idea behind it. I found some information on the internet about how to do it, but it isn't explained clearly, at least for me. I'd like a step-by-step explanation, with code and some background on how it works. I've read that there is more than one way to do this. I would really appreciate your help.

Well, this OpenGL wiki article explains the idea step-by-step. With code.

It would be easier if you told us what part of the process confused you.

Well… I get confused by so many kinds of positions: the clip-space position, the eye-space position, the normalized-device-coordinate-space position, the window-space position. These positions confuse me a lot. I think I just need the positions with the model matrix transformations applied to them, for lighting. Is that correct?

I calculate the finalPosition using:

```glsl
vec4 screenSpacePosition = vec4(uvs.x * 2.0 - 1.0, uvs.y * 2.0 - 1.0, c * 2.0 - 1.0, 1);
vec4 worldSpacePosition = inverse(proj) * screenSpacePosition;
vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
```

where c is the linearized depth. Is that code correct, or am I doing something wrong?

I’m not sure how you got your code this wrong, when the Wiki article actually provides GLSL code to do what you want. Or at least, it takes you to camera/eye-space; it’s assumed that you can go to whatever your “final” lighting space is from there.

```glsl
vec4 screenSpacePosition = vec4(uvs.x * 2.0 - 1.0, uvs.y * 2.0 - 1.0, c * 2.0 - 1.0, 1);
```

I’m not sure what “uvs” is, but from the name, I’m guessing that this is some kind of normalized texture coordinate. If so, then the result of this computation is most assuredly not “screen-space”.

It’s NDC-space. Well, mostly.

Also:

> where c is the linearized depth

How do you linearize your depth? Also, NDC space doesn’t use a linear depth, so you’d have to undo any such linearization before you could use this.
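To illustrate the point, here is one common linearization and its inverse, sketched in pure Python rather than GLSL (the near/far planes and the `linearize`/`delinearize` names are made up for this example). If your `c` went through something like `linearize`, you would have to apply `delinearize` before building an NDC position from it:

```python
# One common way of linearizing a perspective depth value, plus its
# inverse. near/far (n, f) are hypothetical planes; d plays the role
# of the raw value sampled from the depth texture.
n, f = 0.1, 100.0

def linearize(ndc_z):
    # NDC depth in [-1, 1] -> positive eye-space distance in [n, f]
    return 2.0 * n * f / (f + n - ndc_z * (f - n))

def delinearize(eye_dist):
    # positive eye-space distance -> NDC depth in [-1, 1]
    return (f + n - 2.0 * n * f / eye_dist) / (f - n)

d = 0.5                    # raw depth-texture value, range [0, 1]
ndc_z = d * 2.0 - 1.0      # remap to NDC range [-1, 1]
eye_dist = linearize(ndc_z)
assert abs(delinearize(eye_dist) - ndc_z) < 1e-9
```

Note how `linearize(-1.0)` gives back the near plane and `linearize(1.0)` the far plane; the mapping is very non-linear in between, which is exactly why a linearized `c` cannot be fed straight into an NDC position.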

```glsl
vec4 worldSpacePosition = inverse(proj) * screenSpacePosition;
vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
```

Well, that’s most assuredly wrong, even accounting for the misnaming of “screenSpacePosition”. This is backwards.

The projection matrix is how you go from eye/camera space to clip-space. So the inverse projection would go from clip-space to eye/camera space. Similarly, the division-by-W is how you go from clip-space to NDC space. So undoing the division would be how you go backwards.

Which leads to the main problem the Wiki page discusses: how to undo the perspective divide without the old clip-space W, which you no longer have.
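To make the forward path concrete, here is a small pure-Python sketch (the fovy/aspect/near/far values are made up) of eye → clip → NDC for a standard perspective matrix, showing where the divide-by-W happens and why the clip-space W is no longer present afterwards:

```python
import math

# Forward path eye -> clip -> NDC for a standard perspective projection,
# to show where the divide-by-w happens. Camera parameters are hypothetical.

def perspective(fovy_deg, aspect, n, f):
    t = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[t / aspect, 0.0, 0.0, 0.0],
            [0.0, t, 0.0, 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0]]

def mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
eye = [1.0, 2.0, -5.0, 1.0]        # eye/camera-space position, w = 1
clip = mul(proj, eye)              # projection matrix: eye -> clip
ndc = [v / clip[3] for v in clip]  # divide by w: clip -> NDC

# clip[3] is -eye_z (here 5.0); after the divide it collapses to 1.0,
# which is exactly the information the G-buffer no longer holds.
```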

RobertBerg:

> I calculate the finalPosition using:
>
> ```glsl
> vec4 screenSpacePosition = vec4(uvs.x * 2.0 - 1.0, uvs.y * 2.0 - 1.0, c * 2.0 - 1.0, 1);
> vec4 worldSpacePosition = inverse(proj) * screenSpacePosition;
> vec3 finalPosition = worldSpacePosition.xyz / worldSpacePosition.w;
> ```
>
> where c is the linearized depth. Is that code correct, or am I doing something wrong?
The variable names are misleading, but there’s nothing actually wrong with it, assuming that uvs are normalised texture coordinates and c is the value from a depth texture (with range 0…1).

If proj is just a projection matrix, then finalPosition is in eye space. If proj is the concatenation of a projection matrix and a model-view matrix, then finalPosition is in object space.
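Putting the thread together, here is a pure-Python round trip (made-up camera parameters; the helper names are invented for this sketch) that projects an eye-space point, fakes the uv/depth-texture inputs, and then reconstructs the eye-space position the way the quoted GLSL does. The trick is that `inverse(proj)` applied to the NDC position yields the eye-space position scaled by 1/w_clip, with the 1 in the w slot becoming 1/w_clip, so dividing by the resulting w recovers the original point:

```python
import math

# Round trip: eye space -> clip -> NDC -> fake "G-buffer" (uv + depth)
# -> back to eye space via the inverse projection and a divide-by-w.
# All camera parameters are hypothetical.

def perspective(fovy_deg, aspect, n, f):
    t = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [[t / aspect, 0.0, 0.0, 0.0],
            [0.0, t, 0.0, 0.0],
            [0.0, 0.0, -(f + n) / (f - n), -2.0 * f * n / (f - n)],
            [0.0, 0.0, -1.0, 0.0]]

def inverse_perspective(fovy_deg, aspect, n, f):
    # Closed-form inverse of the matrix above.
    t = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    c = -(f + n) / (f - n)
    d = -2.0 * f * n / (f - n)
    return [[aspect / t, 0.0, 0.0, 0.0],
            [0.0, 1.0 / t, 0.0, 0.0],
            [0.0, 0.0, 0.0, -1.0],
            [0.0, 0.0, 1.0 / d, c / d]]

def mul(m, v):
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

fovy, aspect, near, far = 60.0, 16.0 / 9.0, 0.1, 100.0
proj = perspective(fovy, aspect, near, far)
inv_proj = inverse_perspective(fovy, aspect, near, far)

# Forward: the point a normal vertex shader would have produced.
eye = [1.0, 2.0, -5.0, 1.0]
clip = mul(proj, eye)
ndc = [v / clip[3] for v in clip]

# Fake the deferred-shading inputs: texture coordinate and raw depth value.
uvs = [(ndc[0] + 1.0) / 2.0, (ndc[1] + 1.0) / 2.0]
c = (ndc[2] + 1.0) / 2.0            # raw (non-linear!) depth, range [0, 1]

# Reconstruction, mirroring the quoted GLSL line for line:
ndc_pos = [uvs[0] * 2.0 - 1.0, uvs[1] * 2.0 - 1.0, c * 2.0 - 1.0, 1.0]
h = mul(inv_proj, ndc_pos)          # eye-space position scaled by 1/w_clip
eye_rec = [h[i] / h[3] for i in range(3)]

assert all(abs(eye_rec[i] - eye[i]) < 1e-6 for i in range(3))
```

With `c` taken straight from the depth texture (not linearized), the reconstruction lands back on the original eye-space point, which is what the reply above is saying.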