View Space Light Position Moving...

Hate to have to post about this, but in every post I have found on this exact problem, either the discussion trailed off or the person never said how they fixed it.
I'm doing deferred lighting in view space, reconstructing the view-space position from a standard 24-bit depth (/stencil) buffer texture. I'm setting the modelview matrix to just the viewing matrix, reading it back, and multiplying my lights' world-space direction and position by it before passing them to the shader.
All the lighting looks correct when not moving (everything is correct relative to the walls, floor, each other, etc.), but when the camera moves, toward the edges of the screen you can see the light positions move in the opposite direction to the camera, pretty much 1:1. If I strafe left, the lights go right; if I go down, they go up. If I rotate the view, they seem to rotate in the opposite direction.
I'm pretty sure everything is in the correct space, or the spot lights wouldn't be pointed correctly and the point lights wouldn't be where I want them to be (when not moving). There's just this side effect when I move, and I can't figure out where else I should be looking.
I've been pulling my hair out trying to figure out what is causing it; any help would be appreciated.

when the camera moves, toward the edges of the screen you can see the light positions move in the opposite direction to the camera, pretty much 1:1. If I strafe left, the lights go right; if I go down, they go up. If I rotate the view, they seem to rotate in the opposite direction.

There’s probably something wrong in your dot(…, light_spot_dir). Are you sure the vectors are in the correct space, especially light_spot_dir?

I just did a test and made the mistake of not ensuring all the needed vectors were in view space. It's an easy mistake for me to make because I usually do my lighting in world space.

When I made the mistake, it was because I did not make sure that light_spot_dir was in view space as well, and because of that I ended up with the same problem you are having.

Sounds like ravage has got you covered.

What might help (rather than just looking at wacky lighting results and trying to infer from that) is to render a point/sphere/vector/cone/etc. at the position of the light sources. That’ll tell you if your world-space positions (and directions, if you render a vector or a cone for the direction too) for them are correct. If not, look backwards (at how you compute those positions/directions). If so, look forwards (i.e. at transforming position and direction to eye-space properly).

It seems the spaces are correct. I transform the point light position as a point using the 4x4 view matrix (set with a look-at and read right back) so it gets translated correctly, same with the spot position; for the spot direction I just rotate it. Each of those vector3s is in world coordinates before applying the view matrix on the CPU. I'm drawing a sphere for each position and an extra axis for the spot orientation; they sit where they should be.
Last night I added an extra render target to the g-buffer for debugging and just dumped the position into it using varying vec3 position = vec3(gl_ModelViewMatrix * gl_Vertex). I then used this texture to get the positions for the spot light shader only, skipping the reconstruction from depth, and now only the point light moves with the camera. So it seems to be something with the position reconstruction from depth.
I had been doing it the Leadwerks way from a while back. I've read other ways of doing it, but a matrix multiply per full-screen pixel wasn't very appealing.

vec3 PositionFromDepth(in float depth)
{
	vec3 vsPos;
	vsPos.z = (near / (far - (depth * FminN))) * far;
	vsPos.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
	vsPos.x *= vsPos.z;
	vsPos.y = (((-gl_FragCoord.y * heightInv) + 0.5) * 2.0) * aspectInv;
	vsPos.y *= -vsPos.z;
	vsPos.z = -vsPos.z;

	return vsPos;
}

I'm using a texture rectangle for depth ( vec3 pixelPosition = PositionFromDepth(texture2DRect(us_Texture0, gl_FragCoord.xy).x) ). I'm not sure if that has something to do with it, but I'm pretty sure the problem is in the reconstruction at this point, because if I work around this code within the spot shader, it stops moving (direction and position remain correct, according to the debug sphere/axis drawing), but the point light still moves using this, both in the same scene at the same time.

This doesn’t look right.

Let’s pick this apart.

First, if “depth” is in fact a 0…1 WINDOW-space depth value, I can convince myself that vsPos.z will end up being an EYE-space depth value, assuming a perspective projection, assuming w_eye = 1 (which it typically is), and assuming the depth was rendered with glDepthRange 0…1. In general: z_eye / w_eye = fn / [ z_win * (f-n) - f ], and that’s what this apparently computes.

However, the computation of the X and Y EYE-space values (vsPos.x and vsPos.y) looks really funny. First of all, this line:

vsPos.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;

does appear to compute the NDC-space X value (x_ndc) from a WINDOW-space X value (assuming widthInv is 1/width where width is the 3rd parameter to glViewport). However, the corresponding NDC-space Y value (y_ndc) computation:

vsPos.y = (((-gl_FragCoord.y * heightInv) + 0.5) * 2.0) * aspectInv;

there’s this inversion that I don’t think should be there. This would flip up and down. WINDOW-space runs bottom-to-top, and NDC-space runs bottom-to-top. So I don’t believe there’s any case for inversion here. And I’m not buying the aspectInv term. So I think this first Y line should be:

vsPos.y = (((gl_FragCoord.y * heightInv) - 0.5) * 2.0);

Then the next couple of lines in the X and Y transforms allegedly take NDC-space X & Y to EYE-space X & Y:

vsPos.x *= vsPos.z;
vsPos.y *= -vsPos.z;

At this point, vsPos.z is the negative of EYE-space Z (i.e. -z_eye), so it is positive.

The only way I can make the X computation fly is if I assume a 90-degree symmetric perspective FOV frustum (r=-l, and r/n = 1). Then it works. But surely we don’t want to hard-code a 90-degree FOV symmetric frustum assumption in the position reconstruction logic (??)

Now as for vsPos.y, there’s another odd negation again… That counteracts the negation in the computation of the NDC-space Y value on the previous line. So what we really have here (without that needless confusion) is:

	vsPos.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
	vsPos.x *= vsPos.z;
	vsPos.y = (((gl_FragCoord.y * heightInv) - 0.5) * 2.0) * aspectInv;
	vsPos.y *= vsPos.z;

So now they’re consistent (except for the aspectInv term), and I now buy the computation of NDC-space Y (first Y vsPos.y line). But we have the same problem with the 2nd Y line as with X. And as I mentioned, we have this confusing aspectInv term.

So I’m not seeing correct EYE-space X and Y computations here at all, unless we assume a symmetric perspective frustum with X and Y FOVs of 90 degrees, and unless we assume aspectInv = 1.0. And surely we’re not intentionally hard-coding those assumptions…

So, crunching some formulas quickly, I “think” what you want to reconstruct the EYE-space position is:

vec3 PositionFromDepth_DarkPhoton(in float depth)
{
  vec2 ndc;             // Reconstructed NDC-space position
  vec3 eye;             // Reconstructed EYE-space position

  eye.z = near * far / ((depth * (far - near)) - far);

  ndc.x = ((gl_FragCoord.x * widthInv) - 0.5) * 2.0;
  ndc.y = ((gl_FragCoord.y * heightInv) - 0.5) * 2.0;

  eye.x = ( (-ndc.x * eye.z) * (right-left)/(2*near)
            - eye.z * (right+left)/(2*near) );
  eye.y = ( (-ndc.y * eye.z) * (top-bottom)/(2*near)
            - eye.z * (top+bottom)/(2*near) );

  return eye;
}

which you could simplify a bit by factoring out -eye.z/(2*near).

And of course, if you assume a “symmetric” perspective frustum (but not necessarily one that is 90 deg FOV), the eye.x/.y lines simplify down to:

  eye.x = (-ndc.x * eye.z) * right/near;
  eye.y = (-ndc.y * eye.z) * top/near;

This is pretty close to what Leadwerks had, except that we’re missing those right/near and top/near terms, and there’s that unexplained aspectInv term in his code…

That point light won't budge. Thanks a lot, Dark Photon, that was more helpful than you will ever know. Now I just have to make sure this is done for each light.
For anyone who might come across this post, I can confirm this fixed the problem perfectly.
Again, thanks to both of you for your help and suggestions.

Great! Glad we got you fixed up.

I am having a similar problem, but my code is very different from the one in this thread. Basically I am using shaders to do my rendering, and I have a couple of objects positioned in the scene that are both lit correctly. I also render a point sprite so I know where the light is positioned. I can move the light around using the arrow keys and the lighting changes as it should. If I translate the viewpoint left, right, forwards, or backwards, then the light position changes, but the point sprite of course doesn’t move. How do I keep my light from moving with the camera? Here is some code which draws the 3 objects I mentioned.

void DrawLight(void)
{
	M_Translation.Translate(light_pos_model.x, light_pos_model.y, light_pos_model.z);	// Position of the point sprite object
//	M_Rotation.Rotate(rotation, 0.0f, 1.0f, 0.0f);										// Orbit the point object around Y axis
//	M_Rotation_Light.Rotate(-rotation, 0.0f, 1.0f, 0.0f);		// WORKS BUT ONLY IF THE VIEW DOESN'T CHANGE

	Lighting();											// Perform the matrix operations

	// Load the vertex data
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, triangle);
	glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, triangle_colour);

//	glDisable(GL_CULL_FACE);

	// Render the object
	glDrawArrays(GL_POINTS, 0, 1);
//	glDrawArrays(GL_TRIANGLES, 0, 3);

//	glEnable(GL_CULL_FACE);
}

void DrawBox(void)
{
	M_Modelview.LoadIdentity();							// Reset the ModelView matrix
	M_ModelviewProj.LoadIdentity();						// Reset the ModelViewProjection matrix

	M_Translation.Translate(-5.0f, 0.0f, -15.0f);		// Translate 15 units into the screen and 5 units to the left
	M_Rotation.BuildMatrixFromQuaternion(quat);			// Build the rotation matrix
	M_Rotation.LoadIdentity();							// TEMP

	Transform();										// Perform the matrix transformations
	box_test.DrawLWO();									// Draw the box object
}

void DrawCar(void)
{
	M_Modelview.LoadIdentity();							// Reset the ModelView matrix
	M_ModelviewProj.LoadIdentity();						// Reset the ModelViewProjection matrix
	M_Translation.Translate(6.0f, 0.0f, -15.0f);		// Translate 15 units into the screen and 6 units to the right
	M_Rotation.BuildMatrixFromQuaternion(quat);			// Build our rotation matrix from the quaternion 'quat'
	M_Rotation.LoadIdentity();							// TEMP

	Transform();										// Perform the matrix transformations
	car_test.DrawLWO();									// Draw the car object
}

void Draw3D(void)
{
	// (body omitted)
}

void Transform(void)
{
	M_Modeling = M_Translation * M_Rotation;			// OPPOSITE ORDER IS COMPUTED: ROTATION -> TRANSLATION
	M_Modelview = M_Viewing * M_Modeling;				// Combine the Modeling Matrix and the Viewing Matrix (Model -> View)
	M_Modelview.GetMatrix(mv_Matrix);					// Save the Modelview matrix in mv_Matrix

	M_ModelviewProj = M_Projection * M_Modelview;		// MODELVIEW MUST BE COMPUTED FIRST! MODELVIEW -> PROJECTION
	M_ModelviewProj.GetMatrix(mvp_Matrix);				// Save the final Modelview Projection matrix in mvp_Matrix
	glUseProgram(lightProgram);									// START THE SHADER HERE!
	glUniformMatrix4fv(mv_Location, 1, GL_FALSE, mv_Matrix);	// Must be called AFTER glUseProgram (For conversion to eye-space)
	glUniformMatrix4fv(mvp_Location, 1, GL_FALSE, mvp_Matrix);	// Must be called AFTER glUseProgram (For conversion to clip-space)
//	glUniformMatrix4fv(inv_Location, 1, GL_FALSE, inv_Matrix);	// Must be called AFTER glUseProgram (For conversion to clip-space)
	glUniform3f(light_Location, light_pos[0], light_pos[1], light_pos[2]);	// TEST!!
}

void Lighting(void)
{
	M_LightMatrix = M_Rotation * M_Translation;			// Calculate the light matrix for model-space transformation
	M_Modelview = M_Viewing * M_LightMatrix;			// This calculation transforms the point sprite object into eye-space

	M_ModelviewProj = M_Projection * M_Modelview;		// MODELVIEW MUST BE COMPUTED FIRST! MODELVIEW -> PROJECTION
	M_ModelviewProj.GetMatrix(mvp_Matrix);				// Save the final Modelview Projection matrix in mvp_Matrix

	/********* T E S T I N G ********/
	M_LightMatrix = M_Rotation_Light * M_Translation;

	light_pos_world = M_LightMatrix * light_pos_model;	// Convert the light position from object-space to world-space
	light_pos_eye = M_Viewing * light_pos_world;		// Convert the light position from world-space to eye-space

//	light_pos[0] = light_pos_world.x;
//	light_pos[1] = light_pos_world.y;
//	light_pos[2] = light_pos_world.z;

	light_pos[0] = light_pos_eye.x;
	light_pos[1] = light_pos_eye.y;
	light_pos[2] = light_pos_eye.z;

	glEnableVertexAttribArray(0);		// Enable the vertex position attribute array
	glEnableVertexAttribArray(1);		// Enable the vertex colour attribute array
	glUseProgram(basicProgram);												// START THE SHADER HERE!
	glUniformMatrix4fv(basic_mvp_Handle, 1, GL_FALSE, mvp_Matrix);			// Must be called AFTER glUseProgram
	glUniform3f(light_Location, light_pos[0], light_pos[1], light_pos[2]);	// TEST!!
}

Here is the vertex shader to calculate the lighting…

#version 150

const float			c_zero = 0.0;
const float			c_one = 1.0;

uniform vec3		light_pos;			// Light Position in eye-space
uniform mat4		mv_Matrix;			// ModelView Matrix
uniform mat4		mvp_Matrix;			// ModelViewProjection Matrix
uniform mat4		inv_Matrix;			// Inverse ModelViewMatrix

in vec4				in_position;
in vec4				in_colour;
in vec3				in_normal;

out vec4			out_colour;

vec4 calculate_orbit_light(void)
{
	vec4 final_colour = vec4(c_zero, c_zero, c_zero, c_zero);
	mat4 inv_Matrix = inverse(mv_Matrix);
//	mat4 inv_Trans = transpose(inv_Matrix);		// Calculate the inverse-transpose modelview matrix
	// Transform the vertex from object-space to eye-space
	vec3 V = vec3(mv_Matrix * in_position);
	// Transform the normal's orientation into eye-space by the inverse-transpose modelview matrix
//	vec3 N = vec3(inv_Trans * vec4(in_normal, 0.0));
	vec3 N = vec3(mv_Matrix * vec4(in_normal, 0.0));
	N = normalize(N);									// <--- VERY VERY IMPORTANT!! (normalize() returns a value; it does not modify N in place)
	// This will be used for attenuation
	float distance = length(light_pos - V);
	// Get a lighting direction vector from the light position to the vertex
	vec3 IL = vec3(inv_Matrix * vec4(light_pos, 0.0));
	vec3 L = normalize(IL - V);
	// Calculate the dot product of the light vector and vertex normal
	// If the normal & light vector are parallel then the surface polygon
	// will receive maximum illumination!
	float diffuse = max(dot(N, L), 0.12);
	// Attenuate the light based on distance
//	diffuse = diffuse * (1.0 / (1.0 + (0.75 * distance * distance)));
	// Finally we multiply the colour by the illumination level
	// It will be interpolated across the triangle
	final_colour = in_colour * diffuse;

	return final_colour;
}

void main()
{
	// This line converts the vertex from object-space to clip-space
	gl_Position = mvp_Matrix * in_position;
	// Send the new vertex colour to the fragment shader
	out_colour = calculate_orbit_light();
}

Do I need to pass the view position to the vertex shader? I should also point out that the code was originally written to orbit the light position around the 3D objects, but the problem with that was that the light was moving in the opposite direction to what it should. Hence the temporary ‘-rotation’ fix in the DrawLight() function.

Also the camera view position is updated every frame in the main render function:

// This is the main drawing function which is called every frame...
void Render(CLOCK *time)
{
	// First we need to clear the screen and the depth buffer...

	// Reset the Modelview Matrix (start with a nice clean slate)

//	Draw2D(time);
}

Hi. Not getting much luck here, am I. Maybe I should have started a new thread. Could a mod move the above post to a new thread, please? I read the rules about not posting in old threads; sorry about that.