Positional Lights in Deferred Rendering

I am working on a very basic deferred renderer at the moment, trying to get the fundamentals working.

I have a scene with one directional light and one point light. The drawing for the directional light seems to work OK, but the point light does not stay stationary within the scene: it moves around relative to the camera, though not at a constant relative position.

During the geometry stage I save the modelview matrix just after setting the camera, and then use that matrix again in the lighting stage to get the light's position in view space. Is this correct?
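To be concrete, the approach I am describing is roughly this (a sketch with placeholder names, not my actual code):

```
// Geometry stage (CPU side): after positioning the camera, save the
// modelview matrix before any object transforms are applied -- this
// is the camera's view matrix.

// Lighting stage (vertex shader): reuse that saved matrix to move the
// world-space light position into view space.
uniform mat4 savedCamViewMtx;	// view matrix saved during the geometry stage
uniform vec4 lightPosWorld;	// light position in world space, w = 1.0

varying vec3 lightPosView;

void main()
{
	lightPosView = (savedCamViewMtx * lightPosWorld).xyz;
	// ...rest of the lighting-stage vertex shader...
}
```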

The vertex shader code is:


#define MAX_LIGHTS 4	//#assumed; set to match the application

struct Light
{
	bool initialised;
	vec4 position;	//#world space; w = 0.0 for directional, 1.0 for point
};

uniform Light uniLight[MAX_LIGHTS];
uniform mat4 uniModelViewMtx, uniModelViewProjectionMtx;
uniform mat4 camModelViewMtx, camModelViewProjectionMtx;
uniform mat3 camNormalMtx;

attribute vec4 attrVertex;
attribute vec4 attrTexture;

varying vec4 vryTexCoord;
varying vec3 vryLightDir[MAX_LIGHTS];
varying vec3 vryEyeVec;

void main()
{
	//#set the texture coordinate
	vryTexCoord = attrTexture;
	
	//#transform the vertex
	gl_Position = uniModelViewProjectionMtx * attrVertex;	
	
	//#transform each light position into view space
	for(int i = 0; i < MAX_LIGHTS; i++)
	{
		if(uniLight[i].initialised)
		{
			vryLightDir[i] = (camModelViewMtx * uniLight[i].position).xyz;
		}
	}	
	
	vryEyeVec = -(uniModelViewMtx * attrVertex).xyz;
}

where camModelViewMtx is the camera's modelview matrix saved from the geometry stage. (If I pass in an identity matrix instead, the point light follows the camera, as before.)
uniLight[i].position is passed in in world space.

The fragment shaders are:


void PointLightNew (const in Light light, in vec3 lightPos, in vec3 ecPosition3, in vec3 normal, inout vec4 diffuse)
{
	float nDotVP;      // normal . light direction
	float attenuation; // computed attenuation factor
	float d;           // distance from surface to light source
	vec3 VP;           // direction from surface to light position

	// Compute vector from surface to light position
	VP = lightPos - ecPosition3;
	// Compute distance between surface and light position
	d = length (VP);
	// Normalize the vector from surface to light position
	VP = normalize (VP);
	// Compute attenuation (constant/linear/quadratic terms hardcoded for now)
	//attenuation = 1.0 / (light.constantAttenuation + light.linearAttenuation * d + light.quadraticAttenuation * d * d);
	attenuation = 1.0 / (1.0 + 0.1 * d + 0.0 * d * d);

	nDotVP = max (0.0, dot (normal, VP));
	// Hardcoded red diffuse colour for debugging
	diffuse += vec4(1.0, 0.0, 0.0, 1.0) * nDotVP * attenuation;
}
void main()
{
	vec4 ambient = vec4(0.0, 0.0, 0.0, 0.0);
	vec4 diffuse = vec4(0.0, 0.0, 0.0, 0.0);
	vec4 specular = vec4(0.0, 0.0, 0.0, 0.0);
	
	vec4 normalBuf = texture2D(tex1, vryTexCoord.st);	
	normalBuf.xyz -= 0.5;
	
	vec4 depth = texture2D(tex2, vryTexCoord.st);		

	vec3 screencoord;
	screencoord = vec3(	((gl_FragCoord.x/screenSize.x)-0.5) * 2.0,
				((-gl_FragCoord.y/screenSize.y)+0.5) * 2.0 / (screenSize.x/screenSize.y),
				DepthToZPosition( depth.x ));
	screencoord.x *= screencoord.z;
	screencoord.y *= -screencoord.z;
	
	
	for(int i=0; i< MAX_LIGHTS; i++)
	{
		if(uniLight[i].initialised == true)
		{
			if(uniLight[i].position.w == 0.0)
			{
				vec3 half_vector = normalize((vryLightDir[i] + normalize(vryEyeVec)));
				DirectionalLight(uniLight[i], normalize(normalBuf.xyz), normalize(vryLightDir[i]), half_vector, ambient, diffuse, specular);
			}
			else
			{
				PointLightNew(uniLight[i], vryLightDir[i], screencoord, normalize(normalBuf.xyz), diffuse);				
			}			
		}
	}	
	vec4 colour = texture2D(tex0, vryTexCoord.st) * diffuse;
	gl_FragColor = colour;
}

The code for getting the position back from the depth texture follows the method leadwerks described on this forum.

For simplicity I am currently computing only the diffuse component, and evaluating both lights' calculations over the whole screen.

working OK with directional:

and broken with points:

In the two screenshots above, the point light should be in the same position, at (0, 0, 5.0), somewhere behind the character.

I have been doing a few more checks. I'm pretty sure the result of vryLightDir[i] = (camModelViewMtx * uniLight[i].position).xyz; is the same in both my forward renderer and the deferred one (the forward renderer does, however, show the point light in a stationary position).

Perhaps the way I am reconstructing the view-space position is incorrect:

uniform vec2 screenSize;
uniform vec2 cameraRange;

float DepthToZPosition(in float depth) 
{ 
	// Convert a [0,1] depth-buffer value back to a linear eye-space
	// distance between cameraRange.x (zNear) and cameraRange.y (zFar)
	return cameraRange.x / (cameraRange.y - depth * (cameraRange.y - cameraRange.x)) * cameraRange.y; 
}
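As a sanity check on that function (my own working, not part of the original code), substituting the end points of the depth range gives back the clip planes:

```
// With cameraRange = (zNear, zFar):
//   depth = 0.0:  zNear / (zFar - 0.0)            * zFar = zNear
//   depth = 1.0:  zNear / (zFar - (zFar - zNear)) * zFar = zFar
// So DepthToZPosition returns a positive linear distance in [zNear, zFar].
```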

screencoord = vec3(	((gl_FragCoord.x/screenSize.x)-0.5) * 2.0,
			((-gl_FragCoord.y/screenSize.y)+0.5) * 2.0 / (screenSize.x/screenSize.y),
			DepthToZPosition( depth.x ));	
screencoord.x *= screencoord.z; 
screencoord.y *= -screencoord.z;

vec2 screenSize is passed in as (640, 480), the resolution I am rendering to, and vec2 cameraRange is (1.0, 1000.0), the near and far planes of the camera used to render the geometry. Are these the correct values?

I'm getting the depth value from a 24-bit depth-component texture; the images of the various buffers are on the left side of the screenshots.

Solved! :slight_smile:

The z component of screencoord (the reconstructed view-space coordinate) was calculated in the range zNear to zFar, but the rest of my code assumed z coordinates in the range -zNear to -zFar.
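In case it helps anyone else, the change amounts to something like this (sketched from my snippet above; I've flipped the sign of the reconstructed z and adjusted the x/y scaling to match, so the x and y values come out the same as before):

```
vec3 screencoord;
screencoord = vec3(	((gl_FragCoord.x/screenSize.x)-0.5) * 2.0,
			((-gl_FragCoord.y/screenSize.y)+0.5) * 2.0 / (screenSize.x/screenSize.y),
			-DepthToZPosition( depth.x ));	//#negated: OpenGL view space looks down -z
screencoord.x *= -screencoord.z;
screencoord.y *= screencoord.z;
```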

Lesson learned :slight_smile: Thanks for your time.
