GLSL Point light shader


I’m implementing a simple point light shader in GLSL using OpenSceneGraph. I appear to have run into a problem and I can’t quite visualize what’s wrong.

Right now, I have a model loaded with a sphere rotating about it (to represent a revolving light). After implementing the point light shader (following the Orange book text in chapter 9), I had success. The model appeared to be lit correctly as the sphere rotated about it.

However, if I apply a transformation to the model (e.g., a 90-degree rotation about the x-axis), the point light should then light the sides of the model. The model appears correctly transformed, but the light also appears to have been rotated with it. That is, after the 90-degree rotation, the revolving light is still lighting the top and the bottom, not the sides. (Note that I’m speaking of the lighting effect here; the sphere itself continues to rotate correctly about the sides of the model.)

Following the Orange book, I transformed my normals and normalized them, and transformed my light position by the ModelView matrix. The rest of the code is functionally identical to the Orange book’s prescription in Chapter 9.

I’ve also visualized my transformed normals (applying their x, y, and z values as RGB colors) and they don’t appear to make sense in the world coordinate system that OSG uses by default…

Any thoughts?


I transformed my normals and normalized them

Do you transform the normals by the inverse transpose of the modelview matrix?

Yes. OSG provides a uniform, osg_NormalMatrix, that is identical to gl_NormalMatrix.

Shader code:

```glsl
uniform mat4  osg_ModelViewMatrix;
uniform mat4  osg_ProjectionMatrix;
uniform mat3  osg_NormalMatrix;
attribute vec4 osg_Vertex;
attribute vec3 osg_Normal;
uniform vec4  lightPos;
uniform vec4  lightAmbient;
uniform vec4  lightDiffuse;
uniform float lightConstAttn;
uniform float lightLinAttn;
uniform float lightQuadAttn;
varying vec4  lightColor;

void main()
{
	vec4 modelViewVertex = osg_ModelViewMatrix * osg_Vertex;
	vec3 ecPos3 = vec3( modelViewVertex ) / modelViewVertex.w;

	vec3 normal = normalize( osg_NormalMatrix * osg_Normal );
	vec4 lightPos2 = osg_ModelViewMatrix * lightPos;
	vec3 VP = vec3( lightPos2 ) - ecPos3;
	float d = length( VP );
	VP = normalize( VP );
	float attenuation = 1.0 / ( lightConstAttn +
	                            lightLinAttn * d +
	                            lightQuadAttn * d * d );
	float nDotVP = max( 0.0, dot( normal, VP ));
	lightColor = lightAmbient + ( lightDiffuse * nDotVP * attenuation );

	gl_Position = osg_ProjectionMatrix * modelViewVertex;
}
```

Just to simplify code, I’ve removed the specular calcs.

Why are you dividing by w here?

vec3 ecPos3 = (vec3( modelViewVertex )) / modelViewVertex.w;

You should use the modelViewVertex directly here:

	VP = vec3( lightPos2 ) - vec3( modelViewVertex );

I’m following the Orange book there. That’s the computation for the “eye coordinate,” reducing it from a homogeneous vec4 to a non-homogeneous vec3. I’ve tried it both ways and haven’t seen an appreciable difference.
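For what it’s worth, with a purely affine modelview matrix and an incoming vertex whose w component is 1.0, the two forms are equivalent, which would explain why both versions look the same:

```glsl
vec4 modelViewVertex = osg_ModelViewMatrix * osg_Vertex;

// The modelview matrix is affine (its bottom row is 0 0 0 1), so if
// osg_Vertex.w == 1.0 then modelViewVertex.w == 1.0 as well, and the
// homogeneous divide changes nothing:
vec3 ecPos3 = vec3( modelViewVertex ) / modelViewVertex.w;
```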

I have a bit more insight on what’s happening. When I create a model and apply the point light shader to it I expect the following behavior:

  1. When the model rotates, a different side of it is lit (the model is rotated independently of the light source).

  2. When I manipulate my camera (using OSG’s “Trackball Manipulator” mechanism), both the model and the light should rotate because I’m moving about within the scene, of which the light is a part…

If I don’t transform the light position by the ModelView matrix, I achieve #1 but fail #2. This makes sense: I’m effectively specifying the light source directly in eye space, so it can never move with the camera because it’s never transformed by the view matrix.

If I do transform the light position, I fail #1, but achieve #2, for precisely the opposite reason as above.

To put it another way, when I rotate the model, either the light somehow rotates with it, or the model’s transformation is never reflected in the shader’s computations…

My grasp of 3D transformations is a bit sketchy, but it seems to me that I want to transform the light position by the view matrix, but not the model matrix (i.e., I want the light to remain unaffected by changes in the model orientation, but affected by changes in the camera orientation).
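One sketch of that approach: compute the eye-space light position on the application side (world-space light position multiplied by the view matrix only, e.g. in an osg::Uniform update callback) and hand the shader a ready-made value. The uniform name eyeLightPos below is made up for illustration:

```glsl
// Hypothetical uniform: the application has already multiplied the
// world-space light position by the *view* matrix only, so the model
// transform never touches it.
uniform vec4 eyeLightPos;

void main()
{
	vec4 modelViewVertex = osg_ModelViewMatrix * osg_Vertex;
	vec3 ecPos3 = vec3( modelViewVertex ) / modelViewVertex.w;

	// No osg_ModelViewMatrix multiply here: eyeLightPos is already in
	// eye space, so rotating the model no longer drags the light along.
	vec3 VP = vec3( eyeLightPos ) - ecPos3;

	// ... attenuation and diffuse terms as before ...
}
```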

That, anyway, is the behavior I’d naturally expect if I were setting up a scene with lighting… Perhaps my expectations are mistaken?

Make sure your shader uniform lightPos always contains a 1 in the fourth component. This is important when multiplying by the modelview matrix; otherwise the light is not transformed into eye space properly.
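A minimal sketch of that guard, applied inside the vertex shader (you could equally fix the value on the application side when setting the uniform):

```glsl
// Force lightPos to be a positional light (w = 1.0) before the
// modelview multiply. With w == 0.0 the matrix's translation column
// is ignored, and the light behaves like a directional light instead.
vec4 lightPos2 = osg_ModelViewMatrix * vec4( lightPos.xyz, 1.0 );
```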
