 # Lighting shader with weird results

Hi there,

I’m currently working on a shader (the classic lighting stuff), but I can’t seem to cast the light correctly from my vertex normals according to the object’s transformation matrix.

The scene:

A simple plane (2 triangles) with 4 vertices, with 4 normals pointing up (0, 1, 0).
A light, moving up and down with a sin function (to better see the changes)

The Vertex shader:

```glsl
in vec4 position;
in vec4 normal;
in vec2 texCoords;

varying vec2 vTex;
varying vec3 N;
varying vec3 v;
varying vec4 light;
uniform vec3 lightPos;

void main() {
    mat4 mvp = gl_ProjectionMatrix * gl_ModelViewMatrix;
    vTex     = texCoords;

    v     = vec3(mvp * gl_Vertex);
    N     = normalize(vec3(normal));
    light = vec4(lightPos, 0.1);

    gl_Position = mvp * gl_Vertex;
}
```

The Fragment Shader

```glsl
uniform sampler2D myTexture;
varying vec2 vTex;
varying vec3 N;
varying vec3 v;
varying vec4 light;

void main() {
    vec3 L = normalize(light.xyz - v);
    vec4 d = vec4(1.0, 1.0, 1.0, 1.0) * max(dot(N, L), 0.0);

    vec4 texColor = texture2D(myTexture, vTex);
    gl_FragColor  = d;
}
```

The variables

lightPos: the light position in world coordinates.

The result

(A video and an image of the result were attached to the original post.)

Any tips?

best regards

You need to be consistent with coordinate systems in your vector math.

If `lightPos` is in WORLD space, then so should `v` and `N` be. However `v` is in CLIP space. And it’s not at all clear what space `N` is in.

OK, so basically I have some doubts:

What coordinate system you use is up to you and your needs. The main point is that the lighting math doesn’t make sense unless all of its inputs are in the same coordinate system.

Direct usage of world space generally should be avoided, as you can lose precision if the area of interest is far from the world center. But even that depends on how you scale your world.

My issue is that I have a plane at (0, 0, -5) in world coordinates, and a light at (0, 1, -5) in world coordinates:

• I know that `gl_ModelViewMatrix * gl_Vertex` is my transformed vertex
• I don’t know how to go from lightPos (in world coordinates) to one relative to the vertex. Everything I try, I get a light displaced 5 units from my plane, when they should share the same x and y coordinates. How can I perform this? Any help?

Is there any way to unapply the vertex’s transformations from the light, so that the light ends up in the correct position relative to the vertex?

In the image above, I could only achieve that by changing the light position from (0, 1, -5) to (0, 0, 0); that way the position lands in the middle of my plane.

And every time I change my plane’s world position, the point (0, 0, 0) follows its center.

This is the expression I’m using to test it: `gl_ModelViewMatrix * vec4(0.0, 0.0, 0.0, 1.0)`

So what I think I need is the model matrix, which I don’t have by default; I only have the ModelView and Projection matrices.

I was trying to avoid passing too much info to the shader, to keep the amount of data sent as low as I can. But if I must, should I use a uniform? Or is there another way to access it?

And in order to make the correct calculations, what I think I need to do is the following:

1. Convert the vertex position (local) into world coordinates: `modelMatrix * vertex` = correct world vertex pos

2. Convert the light position (world) into model coordinates: `modelMatrix * lightPos` = correct world light pos

3. Calculate the distance from the light to the vertex

4. And then take the dot product of the normal and the light-to-vertex vector.

is that correct?

Yep, as I said, that did the trick, but I used a uniform as a test. Is that the best way? Every single frame a new matrix is being sent; looks like a bit of overkill, doesn’t it?

The model-view matrix transforms directly from object space to eye space, bypassing “world space”.

Lighting calculations are usually performed in eye space, for two reasons:

1. It eliminates the need for world space, avoiding unnecessary transformations.
2. The eye position (which is necessary for calculating specular reflection) is at (0,0,0) in eye space, which simplifies the calculations.

What I would suggest is to transform the light positions to eye space in the application (i.e. `lightPos` should be in eye space), and have the shader perform the lighting calculations in eye space.
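To make the suggested CPU-side step concrete, here is a minimal Python sketch of the math (the view matrix and positions are made up for illustration; in a real application you would use your math library, e.g. glm, and upload the result as the `lightPos` uniform):

```python
# Hypothetical view matrix: camera at (0, 0, 5) looking down -Z,
# so the view transform is a translation by (0, 0, -5).
view = [
    [1.0, 0.0, 0.0,  0.0],
    [0.0, 1.0, 0.0,  0.0],
    [0.0, 0.0, 1.0, -5.0],
    [0.0, 0.0, 0.0,  1.0],
]

def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major) by a 4-component vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# World-space light position (w = 1.0 for a positional light).
light_world = [0.0, 1.0, -5.0, 1.0]

# Done once per frame on the CPU; the result is what the shader receives.
light_eye = mat_vec(view, light_world)
print(light_eye)  # [0.0, 1.0, -10.0, 1.0]
```

The key point is that this transform happens once per frame, not once per vertex.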

The way that fixed-function (legacy) OpenGL lighting works is that the light position set by `glLight(...,GL_POSITION,...)` is transformed by the model-view matrix in effect at the time of the call, and the eye-space position is stored and used for the lighting calculations.

GClements said:

The way that fixed-function (legacy) OpenGL lighting works is that the light position set by `glLight(...,GL_POSITION,...)` is transformed by the model-view matrix in effect at the time of the call, and the eye-space position is stored and used for the lighting calculations.

That explains all my issues thus far: I was using `gl_LightSource[].position` before and it worked fine, and when I tried to use my own variables it didn’t behave the same way.

many thanks for your extra explanation.

GClements said:

What I would suggest is to transform the light positions to eye space in the application (i.e. `lightPos` should be in eye space), and have the shader perform the lighting calculations in eye space.

Regarding this, I have some concerns,

• So I still need to perform some extra steps on the CPU side; isn’t there any way to move those to the GPU, to avoid doing graphics calculations on the logic side?
• Should I be using gl_LightSource[].position? If not, is there a better way than sending my light position and attributes to the shader via uniforms?
• If I perform the lighting math in eye space, will distances between vectors and their positions relative to each other be preserved?

Many thanks for your help, GClements

Thanks, that did it. But regarding my other question: is there a way to convert a world-space light position into camera space in the shader?

Basically I have ModelPosition of the camera and need ModelViewPosition.

Well, you could send a copy of the model-view matrix to be used for the light position. But then you’d be performing the transformation for every vertex, when it only needs to be done once.

If you’re using `glLight`, then `gl_LightSource` is the mechanism to access those settings in the shader. `gl_LightSource[].position` is already in eye space.

The same as what? If you have distinct model, view and projection matrices (fixed-function OpenGL combines model and view into a single model-view matrix), the view matrix normally consists solely of rotations and translations, so distances between points and angles between lines aren’t changed by the view matrix. IOW, as far as the lighting math is concerned, there’s no difference between world space and eye space.
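That invariance is easy to check numerically. A small Python sketch (rotation angle and translation chosen arbitrarily) applies a rigid transform to the plane and light positions from the thread and shows the distance between them is unchanged:

```python
import math

def rigid(p, angle, t):
    """Rotate point p about the Z axis by `angle`, then translate by t."""
    c, s = math.cos(angle), math.sin(angle)
    x, y, z = p
    return (c * x - s * y + t[0], s * x + c * y + t[1], z + t[2])

def dist(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

a, b = (0.0, 0.0, -5.0), (0.0, 1.0, -5.0)   # plane and light from the thread
a2 = rigid(a, 1.2, (3.0, -4.0, 2.0))
b2 = rigid(b, 1.2, (3.0, -4.0, 2.0))
print(dist(a, b), dist(a2, b2))  # both ≈ 1.0
```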

You can’t normally perform lighting in object space because different objects have different transformations, and the lighting calculations require all vectors to use the same coordinate system. The model part of the model-view transformation often contains scale transformations; for this reason, object-space normals should be transformed by `gl_NormalMatrix` (which is the inverse-transpose of the upper-left 3x3 submatrix of the model-view matrix) rather than by `gl_ModelViewMatrix`. You can use `gl_ModelViewMatrix` if the model transformation contains only uniform scaling (the same scale factor in all directions) and you don’t care about the magnitude of the normals; but if the model transformation contains non-uniform scaling, you need to use the normal matrix to preserve the meaning of dot products.
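To see why the inverse-transpose matters, here is a tiny Python illustration (the scale factors and vectors are arbitrary): with a non-uniform scale, transforming the normal by the model matrix breaks perpendicularity to the surface, while the inverse-transpose preserves it.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Non-uniform model scale: x is stretched by 2. For a diagonal matrix,
# the inverse-transpose is just the reciprocal of each diagonal entry.
scale         = (2.0, 1.0, 1.0)
inv_transpose = (0.5, 1.0, 1.0)

tangent = (1.0, -1.0, 0.0)          # lies in the surface
normal  = (1.0,  1.0, 0.0)          # perpendicular to the tangent

t_world = tuple(s * v for s, v in zip(scale, tangent))

wrong = tuple(s * v for s, v in zip(scale, normal))          # model matrix
right = tuple(s * v for s, v in zip(inv_transpose, normal))  # normal matrix

print(dot(t_world, wrong))  # 3.0 -- no longer perpendicular
print(dot(t_world, right))  # 0.0 -- still perpendicular
```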

And you can’t perform lighting calculations in any space which has a perspective projection relative to world space (i.e. they have to be done before applying the projection matrix, which is why it’s separate).

The view transformation is just the inverse of the camera transformation. With legacy OpenGL, the view transformation was typically constructed using the fact that (A·B)⁻¹ = B⁻¹·A⁻¹, i.e. by inverting the individual rotations and translations and applying them in the reverse order. That was done first, then any model transformation was appended to it prior to rendering operations, using `gl{Push,Pop}Matrix` if necessary to save and restore the view matrix. With modern OpenGL, you’d typically construct the camera transformation as for any other object then explicitly invert it (with e.g. `glm::inverse`) to get the view transformation.
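The inversion rule can be checked numerically. This Python sketch (hand-rolled 4x4 multiply, arbitrary camera values) builds a camera transform as translate·rotate, builds the view matrix from the reversed inverses, and verifies their product is the identity:

```python
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    m = [[float(i == j) for j in range(4)] for i in range(4)]
    m[0][3], m[1][3], m[2][3] = x, y, z
    return m

def rotation_z(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s, 0.0, 0.0], [s, c, 0.0, 0.0],
            [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]

# Camera transform: A·B with A = translation, B = rotation.
camera = matmul(translation(3.0, 0.0, -5.0), rotation_z(0.7))

# View transform built as B⁻¹·A⁻¹: inverse rotation, then inverse
# translation, applied in the reverse order.
view = matmul(rotation_z(-0.7), translation(-3.0, 0.0, 5.0))

# camera · view should be (numerically) the identity.
product = matmul(camera, view)
for i in range(4):
    for j in range(4):
        assert abs(product[i][j] - (1.0 if i == j else 0.0)) < 1e-12
print("view is the inverse of the camera transform")
```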

For lights in world space, you’d set the light positions with `glLight` after constructing the view transformation but before applying any model transformations. The transformation to eye space is performed at the point that `glLight` is called, so the result is unaffected by any subsequent model transformations.

I’ve been fiddling with the model-view transformation and arrived at an approach that seems to differ from what @GClements presents. I’ve only tested it against the glm::lookAt(…) matrix, which I understand to be the model-view matrix. Please correct me if that’s an incorrect assumption.

`vector<eye-coords> = scale(-1,1,-1) x camera_rotate_matrix x inverse(camera_translate_matrix) x vector<world_coords>`

This transformation equals the glm::lookAt() matrix, which in turn lines up with the ensuing glm projection matrix, though it’s a long time since I tested that step.
I haven’t managed to make inverse() or transpose() bite properly on the problem the way @GClements does. Maybe the inverse() on the rotate matrix needs to apply only to the upper-left 3x3 elements, I don’t know. On the translate matrix, inverse() just flips the signs.