Deferred lighting

Hey guys.
I’ve been trying to implement the deferred lighting technique for over 2 weeks now and I’m at my wits’ end with this.
Would anyone be so kind as to lend me the source code of a simple deferred lighting example with GLSL?
I found some snippets using Cg (I think) but I have no idea how that works, even though there are similarities.
I’d even pay someone to help me here. If I could get someone to help me, my engine would really progress a lot.
Thank you in advance.

Great conceptual overview and the highlights

The same story continues in Gems 3

As for the gory OpenGL details I’m sure you can scare up some demo code for setting up framebuffers and what have you, if that’s what you’re struggling with.

This is a good article, should get you started.

And this is also very helpful (I used the GLSL source from this to reconstruct my viewspace position from the depth buffer)

How much have you got done? Where are you stuck?

Well, I’m stuck at the lighting phase.
Those were the 2 documents I read, but I’m not sure if I calculate depth correctly, and my supposed-to-be spot light behaves like a directional light. I was hoping to get some GLSL shaders to study (and maybe a bit of the setup in C++ or another language) <- DX, but still usable

Do you have your own point lights working in normal forward rendering?

I found it helped when debugging the point lights in the lighting stage to ignore any real light calculations and just render anything within a certain radius of the light red (or any other colour that stands out), to make sure I was reconstructing the position correctly.
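That radius check can also be mirrored on the CPU for a quick sanity test. A minimal C sketch (the function name and squared-distance comparison are my own, not from any engine in this thread):

```c
#include <assert.h>

/* Debug check: is a reconstructed position within `radius` of the light?
 * Compares squared distances to avoid a sqrt. */
static int in_light_radius(const float p[3], const float light[3], float radius)
{
    float dx = p[0] - light[0];
    float dy = p[1] - light[1];
    float dz = p[2] - light[2];
    return dx * dx + dy * dy + dz * dz < radius * radius;
}
```

In the lighting-pass shader the same test would output red when it passes and black otherwise.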

Good suggestion James. Thank you.
And Ilian, I did look over the tree tutorial, but I didn’t notice the source was there (don’t know how I missed it). Anyway, I took a peek at it and it looks pretty good. Tomorrow I’ll try to make something out of it. I did notice they use a position buffer, though, and that doesn’t look like the depth buffer.

Hey guys.
I’m still in trouble.
I’m really not sure how to proceed.
I have Leadwerks and I can look inside the shader code, but it works slightly differently than mine.
Should I use a renderbuffer for depth? Currently I use an RGBA texture and encode the depth in it.

Anyway here is my code:
Mesh Vert:

varying vec3 normal;
varying vec4 vpos;
void main()
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	normal = gl_Normal;
	vpos = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
	gl_Position = gl_ProjectionMatrix * gl_ModelViewMatrix * gl_Vertex;
}

Mesh Frag:

varying vec3 normal;
varying vec4 vpos;
uniform sampler2D tex;

vec4 float_to_color(float f)
{
	vec4 color;
	color.x = floor(f);
	f = (f - color.x) * 256.0;
	color.y = floor(f);
	color.z = f - color.y;
	color.x *= 1.0 / 256.0;
	color.y *= 1.0 / 256.0;
	return color;
}

void main()
{
	vec3 n = normal;
	vec4 c = gl_FrontMaterial.diffuse;
	c *= texture2D(tex, gl_TexCoord[0].st);
	gl_FragData[0] = c;
	gl_FragData[1].r = n.x;
	gl_FragData[1].g = n.y;
	gl_FragData[1].b = n.z;
	float d = (vpos.z + 10000.0) / vpos.z;
	gl_FragData[2] = float_to_color(d);
	gl_FragData[3] = vec4(1.0, 1.0, 1.0, 1.0);
}

LightPass vert:

uniform mat4 projMat;
uniform mat4 modelMat;

uniform float radius;
uniform vec4 lightColor;
uniform vec3 lightPos;
uniform vec3 camPos;
attribute vec3 screenDir;

varying vec3 sDir;

uniform mat4 proj_mat;
uniform mat4 mod_mat;
varying vec4 lpos;
varying vec4 Ldir;
void main()
{
	gl_TexCoord[0] = gl_MultiTexCoord0;
	gl_TexCoord[1] = gl_MultiTexCoord0;
	gl_TexCoord[2] = gl_MultiTexCoord0;
	gl_Position = ftransform();
	sDir = screenDir;
	vec4 lp = vec4(lightPos, 1.0);
	lpos = lp;
}

LightPass frag:

float color_to_float(vec3 color)
{
	const vec3 byte_to_float = vec3(1.0, 1.0/256.0, 1.0/(256.0*256.0));
	return dot(color, byte_to_float);
}

vec3 lighting(vec3 SColor, vec3 SPos, float SRadius, vec3 p, vec3 n, vec3 MDiff, vec3 MSpec, float MShi)
{
	vec3 l = SPos - p;
	float dist = length(l);	// take the distance before normalizing, so attenuation sees the real distance
	l = normalize(l);
	vec3 v = normalize(p);
	vec3 h = normalize(v + l);
	vec3 Idiff = max(0.0, dot(l, n)) * MDiff * SColor;
	float att = max(0.0, 1.0 - dist / SRadius);
	vec3 Ispec = pow(max(0.0, dot(h, n)), MShi) * MSpec * SColor;
	return att * (Idiff + Ispec);
}

varying vec3 p;
varying vec3 sDir;
varying vec4 lpos;

uniform sampler2D normalBuffer;
uniform sampler2D depthBuffer;
uniform sampler2D colorBuffer;

uniform vec3 lightPos;
uniform vec3 camPos;
uniform float near;
uniform float far;
uniform float camDir;
uniform float lightRange;

void main()
{
	vec3 depthcolor = texture2D(depthBuffer, gl_TexCoord[0].st).rgb;
	vec3 n = texture2D(normalBuffer, gl_TexCoord[0].st).rgb;
	float pixelDepth = color_to_float(depthcolor);
	vec3 WorldPos = pixelDepth * normalize(sDir);
	gl_FragColor = vec4(lighting(vec3(1,1,1), lightPos, 50.0, WorldPos, n, vec3(0.5,0.5,0.5), vec3(1,1,1), 1.0), 128.0);
	gl_FragColor *= texture2D(colorBuffer, gl_TexCoord[0].st);
}

I guess I’d have to use renderbuffers for depth right?
Here is how I initialize the depth part of the buffer:

if (type & BUFFER_DEPTH)
{
	glGenTextures(1, &this->depth);
	glBindTexture(GL_TEXTURE_2D, this->depth);
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Width, Height, 0, GL_RGBA, GL_FLOAT, 0);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, this->bufferID);
	GLuint d;
	glGenRenderbuffersEXT(1, &d);
	glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, d);
	glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, Width, Height);
	glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, bufferID);
	glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, d);
}

I’m quite sure that this is not good. Could you guys suggest another way?

I think I’m headed in the right direction.
I modified the depth component like this:

if(type & BUFFER_DEPTH)
		glGenTextures(1, &this->depth);
		glBindTexture(GL_TEXTURE_2D, this->depth);


Now I have this:

And from what I’ve seen, I think it’s the correct way to go.
However, I still can’t get anything to show up.
I have my lights in world space, so I have to calculate each fragment’s position in world space, right?

I have this fragment code for the light pass:

uniform float near;
uniform float far;
uniform vec2 bufferSize;
uniform sampler2D depthTex;
uniform sampler2D normalTex;
varying vec4 viewLight;

float DepthToZPosition(in float depth)
{
	return near / (far - depth * (far - near)) * near;
}

vec3 lighting(vec3 SColor, vec3 SPos, float SRadius, vec3 p, vec3 n, vec3 MDiff, vec3 MSpec, float MShi)
{
	vec3 l = SPos - p;
	float dist = length(l);	// take the distance before normalizing, so attenuation sees the real distance
	l = normalize(l);
	vec3 v = normalize(p);
	vec3 h = normalize(v + l);
	vec3 Idiff = max(0.0, dot(l, n)) * MDiff * SColor;
	float att = max(0.0, 1.0 - dist / SRadius);
	vec3 Ispec = pow(max(0.0, dot(h, n)), MShi) * MSpec * SColor;
	return att * (Idiff + Ispec);
}

void main()
{
	vec4 depth = texture2D(depthTex, gl_TexCoord[0].st);
	vec4 normalBuf = texture2D(normalTex, gl_TexCoord[0].st);
	vec3 screencoord = vec3(((gl_FragCoord.x/bufferSize.x) - 0.5) * 2.0,
		((-gl_FragCoord.y/bufferSize.y) + 0.5) * 2.0 / (bufferSize.x/bufferSize.y),
		DepthToZPosition( depth.x ));
	screencoord.x *= screencoord.z;
	screencoord.y *= -screencoord.z;
	gl_FragColor = vec4(lighting(vec3(1,1,1),,200.0,screencoord,normalize(normalBuf),vec3(1,1,1),vec3(1,1,1),128),1);
}

Please, could someone help me here?
I know my normals might not be good, but I’ll get to that once some light is actually being calculated.

I haven’t had time to read all of this, but I think you should be passing in vec4 viewLight in viewspace, since you are reconstructing screencoord in viewspace.

something like this:
viewLight = (light position) * (modelview matrix for camera in geometry pass)
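That transform can be done once per light on the CPU. A small C sketch of the matrix-vector product (assuming the column-major layout that glGetFloatv(GL_MODELVIEW_MATRIX, m) returns; the function name is mine):

```c
#include <assert.h>

/* out = m * v, with m in OpenGL's column-major layout:
 * m[c*4 + r] is row r of column c, so the translation sits in m[12..14]. */
static void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[0 * 4 + r] * v[0] + m[1 * 4 + r] * v[1]
               + m[2 * 4 + r] * v[2] + m[3 * 4 + r] * v[3];
}
```

viewLight would then come from calling this with the camera's modelview matrix and the world-space light position with w = 1.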


Okay, I managed to make it better. I fixed my normals and everything.
So now I can’t figure out how to place a light that has world-space coordinates in view space.
After I set up my camera I get the projection matrix and the modelview matrix with:


So basically this should be the setup that I use in the geometry pass, right?
I draw my geometry pass like normal.
Now when I get to the light shader I use a uniform to send modf and projf to it.

Then in vert I have

viewLight = proj_mat*mod_mat*vec4(lightPos,1);

where lightPos is the light in world space and viewLight is supposed to be the light in view space. However, when I move or rotate my camera it still keeps changing.

I really have no idea what I’m doing wrong. I understood the basic idea of a deferred renderer and I know how to transform vertices from world space into view space, but this seems oddly incorrect.

Also, I think the normals are OK now, but I’m not 100% sure.
In the geometry pass I use this to get the normal:
normal = normalize(gl_NormalMatrix*gl_Normal);
Then I have this:
gl_FragData[1] = vec4(normal*0.5+0.5,1);
And after I get into the light pass I convert it back with:
vec3 n = texture2D(normalTex,gl_TexCoord[0].st).rgb*2.0-1.0;

This is right isn’t it? :slight_smile:

James, I noticed you talked about screencoord being between znear and zfar while the rest of the math is done with -znear and -zfar. That can easily be changed by passing -depth instead of depth into the DepthToZ function, right?
Nevertheless, I’m not sure I have to do that. I don’t have any math done in the -znear and -zfar range.

Anyway could anyone tell me what I’m doing wrong?
I’ll start working on the network component till then.

Sanctus, to get it in viewspace you need to transform the vector by the camera’s modelview matrix, you don’t need the projection matrix.

Can I also suggest that you try rendering some simpler geometry, maybe just a flat plane with, for example, the light at (0,1.0,0) or something similar; it should help you debug any problems more quickly.

Hey James…
Do you mind if I ask for some of your source code to check against mine?
I just want the setup in OpenGL and the shaders (the geometry pass and the lighting pass).
I noticed I used a wrong variable in the DepthToZ function and now that works OK. The thing is that the lights still change as I move the camera, and that shouldn’t be the case, right?
I’m sure something is very mixed up but I can’t guess what.
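For reference, the usual closed form for recovering eye-space distance from a [0,1] depth-buffer value under a standard perspective projection is near*far / (far - depth*(far - near)). This C sketch (function names are mine, and this is the textbook formula, not necessarily identical to the Leadwerks variant quoted earlier) includes the inverse so the pair can be round-trip tested:

```c
#include <assert.h>
#include <math.h>

/* Convert a [0,1] window-space depth back to a positive eye-space distance,
 * for a standard perspective projection with the given near/far planes. */
static float linearize_depth(float depth, float near_p, float far_p)
{
    return near_p * far_p / (far_p - depth * (far_p - near_p));
}

/* Inverse, handy for testing: eye-space distance -> [0,1] window depth. */
static float depth_from_distance(float d, float near_p, float far_p)
{
    return (1.0f / near_p - 1.0f / d) / (1.0f / near_p - 1.0f / far_p);
}
```

Depth 0 should map back to the near plane and depth 1 to the far plane; anything else suggests the formula or a variable is off.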

I might have figured everything out.
First of all, I noticed that when passing the matrix uniform (modelview) I should use false, not true, in the function.

The light appears to shine from the right location, but when I rotate the camera it still shifts a bit.
And I was thinking about the logic. I get the light location in view space, but that’s basically relative to the camera position.
I noticed that when I rotate my camera the depth changes a bit, which should be incorrect. If you rotate your head, a given point in the real world still has the same distance from your eyes. I figured it might come from the perspective, right? So then I’d have to multiply the light by the projection matrix as well. But that doesn’t seem to work either.

Any idea on this?

Hey Sanctus,

I don’t have my code with me just now. But you should have a look at this:
specifically section 3.4.3. Illumination Pass, on page 21 or so.

They are just modifying the light position by the modelview matrix of the geometry pass.

lightpos = scene->light[i].pos*g_render->view_matrix;

also see Listing 20 on page 23 for the fragment shader used during the lighting pass.

If you want a visual clue that you are reconstructing your viewspace positions correctly during the lighting pass try this:
During the geometry pass, add another render target (just for debug; obviously take it out later) and set the colour of this target to the viewspace position in the geometry-stage fragment shader. It should look like a texture divided into 4 colours, with each colour fading into the others in the middle.
so in the geometry pass vertex shader add something like:

varying vec4 debug_pos = vertex_from_app * modelview_matrix;

then in the geometry fragment shader

gl_FragData[NEW_TARGET] = debug_pos;

Then during the lighting pass, instead of doing your lighting calculation, set the colour of the target to the reconstructed viewspace position from the depth buffer. If they don’t look more or less the same, you have a problem with how you are reconstructing positions; otherwise the problem lies elsewhere.

I hope this helps.

Okay, I will try this. But when you can, please post some of your code (or PM me if you don’t want to make it public; and don’t worry, I’m not stealing it).

Okay, I have been doing some work on this.
I did all kinds of tests for depth and position.
Now I’m certain that I get the depth and world-space position correctly.
I did a nice test drawing the length of each screencoord, and I can rotate however I want and it stays the same.
However, now I did a test:

vec3 l = SPos-p;
if (length(l) < SRadius)
	return vec3(1,1,1);
return vec3(0,0,0);

so this should only show the geometry that’s close enough to the light.
Though it still changes as I move/rotate my camera.

Now… I calculate my light in view space like this:
viewLight = mod_mat*vec4(lightPos,1);
mod_mat is GL_MODELVIEW_MATRIX right after I set up my camera.

I really need some help here as it’s driving me nuts.


perhaps these will be of help:

on reconstructing viewspace position from depth buffer:


there has also recently been an article on GDnet about image-space lighting; the article supplies source, however they store the view-space position in a texture from the geometry stage.

In what way is the light source moving? is it close to where it should be?

I managed to fix it. Working smoothly right now.
I had to invert the z of the light. And then I thought I could invert the z of the screencoord instead, but that would change the x and y. And then I moved my multiplication to the CPU so I only do it once.
I also used scissoring for the lights (I think I’ll move to actual meshes for the lights soon, as that’s faster from what I hear).
Anyway, thank you for your help :slight_smile:

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.