I’m currently working on a shader (the classic lighting stuff), but I can’t seem to cast the light correctly with my vertex normals according to the object’s transformation matrix.

The scene:

A simple plane (2 triangles) with 4 vertices, each with a normal pointing up (0, 1, 0).
A light, moving up and down with a sin function (to better see the changes).

In which coordinate system should I pass all the variables?

If I pass them in world space, in which system should I make the calculations? Projection space seems a bit weird, and world space gives me static results, so I’m a bit lost here.

How can I transform the normal and keep it the same from wherever I look? If I perform N = gl_ModelViewMatrix * normal, the normal vector changes depending on my perspective, and that affects the calculations. So how can I convert it from local to world space, but not view space?

What should I pass to the shader, and what should I calculate, to perform something like this:

What coordinate system you use is up to you and your needs. The main point is that the lighting math doesn’t make sense unless all of its inputs are in the same coordinate system.

Direct usage of world space generally should be avoided, as you can lose precision if the area of interest is far from the world center. But even that depends on how you scale your world.

My issue is that I have a plane at (0, 0, -5) world coord, and a light at (0, 1, -5) world coord:

I know that gl_ModelViewMatrix * gl_Vertex is my transformed vertex.

I don’t know how to go from lightPos (in world coordinates) to one relative to the vertex. Everything I try, I get a light displaced 5 units from my plane, when they should have the same x and y coordinates. How can I perform this? Any help?

Is there any way I can unapply the vertex’s transformations from the light in order to have my light in the correct position relative to the vertex?

In the image above, I could only achieve that by changing the light position from (0, 1, -5) to (0, 0, 0), so that the position is in the middle of my plane,

and every time I change my plane’s world position, the point (0, 0, 0) goes along with its center.

This is the point I’m using to test it: “gl_ModelViewMatrix * vec4(0, 0, 0, 1)”.

So what I think I need is the model matrix, which I don’t have by default; I only have the model-view and projection matrices.

I was trying to avoid passing too much info to the shader, to keep the amount of data being sent as low as I can. But if I must, should I use a uniform? Or is there any other way to access it?

And in order to make the correct calculations, what I think I need to do is the following:

Convert the vertex position (local) into world coordinates: modelMatrix * vertex = correct world vertex position

Use the light position as-is, since it is already in world coordinates: lightPos = correct world light position

Calculate the vector from the vertex to the light,

and then take the dot product of the normal and the normalized vertex-to-light vector.
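The steps above can be sketched in plain Python (the matrices and names here are illustrative, not from any real API), doing the whole diffuse calculation in world space:

```python
import math

def mat_vec(m, v):
    # row-major 4x4 matrix times a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# Model matrix: translate the plane to (0, 0, -5) in world space
model = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, -5],
         [0, 0, 0, 1]]

local_vertex = [0, 0, 0, 1]                 # plane center in object space
world_vertex = mat_vec(model, local_vertex)  # -> (0, 0, -5)

world_light = [0, 1, -5]   # already in world space: no model transform
world_normal = [0, 1, 0]   # plane normal; no rotation in the model here

# Vector from the vertex to the light, then the diffuse term
to_light = normalize([world_light[i] - world_vertex[i] for i in range(3)])
diffuse = max(0.0, sum(world_normal[i] * to_light[i] for i in range(3)))
print(diffuse)  # light sits straight above the plane -> 1.0
```

With the plane at (0, 0, -5) and the light at (0, 1, -5), the light direction comes out as straight up, matching the normal, so the diffuse term is 1.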

Yep, as I said, that did the trick, but I used a uniform as a test. Is that the best way? A new matrix is being sent every single frame; that looks like a bit of overkill, doesn’t it?

The model-view matrix transforms directly from object space to eye space, bypassing “world space”.

Lighting calculations are usually performed in eye space, for two reasons:

It eliminates the need for world space, avoiding unnecessary transformations.

The eye position (which is necessary for calculating specular reflection) is at (0,0,0) in eye space, which simplifies the calculations.

What I would suggest is to transform the light positions to eye space in the application (i.e. lightPos should be in eye space), and have the shader perform the lighting calculations in eye space.
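As a rough sketch of that split (plain Python standing in for both the CPU side and the shader math; names are illustrative), the light is transformed to eye space once per frame, and the per-vertex work then happens entirely in eye space, where the eye sits at the origin:

```python
import math

def mat_vec(m, v):
    # row-major 4x4 matrix times a 4-component vector
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return [x / length for x in v]

# View matrix for a camera at (0, 0, 2) looking down -Z:
# the inverse of the camera's translation.
view = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 1, -2],
        [0, 0, 0, 1]]

world_light = [0, 1, -5, 1]
eye_light = mat_vec(view, world_light)   # done ONCE per frame on the CPU,
                                         # then sent to the shader as a uniform

# The shader then works per vertex; here is one eye-space vertex:
eye_vertex = mat_vec(view, [0, 0, -5, 1])[:3]  # plane center -> (0, 0, -7)

to_light = normalize([eye_light[i] - eye_vertex[i] for i in range(3)])
# The eye sits at (0, 0, 0) in eye space, so the view vector is just -P:
to_eye = normalize([-c for c in eye_vertex])
print(to_light, to_eye)   # [0.0, 1.0, 0.0] [0.0, 0.0, 1.0]
```

Note that the relative geometry survives the trip to eye space: the light is still exactly one unit above the vertex, just as it was in world space.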

The way that fixed-function (legacy) OpenGL lighting works is that the light position set by glLight(...,GL_POSITION,...) is transformed by the model-view matrix in effect at the time of the call, and the eye-space position is stored and used for the lighting calculations.

The way that fixed-function (legacy) OpenGL lighting works is that the light position set by glLight(...,GL_POSITION,...) is transformed by the model-view matrix in effect at the time of the call, and the eye-space position is stored and used for the lighting calculations.

That is verbatim all my issues thus far. I was using gl_LightSource[].position before and it worked fine, but when I tried to use other variables it didn’t work the same way.

What I would suggest is to transform the light positions to eye space in the application (i.e. lightPos should be in eye space), and have the shader perform the lighting calculations in eye space.

Regarding this, I have some concerns,

So I still need to perform some extra steps on the CPU side. Isn’t there any way to send those to the GPU, in order to avoid doing graphics calculations on the logic side?

Should I be using gl_LightSource[].position? If not, is there a better way than to be sending my light position and attributes over uniforms to the shader?

If I perform the lighting math in eye space, will it work the same for distances between points, keeping their positions relative to each other?

Well, you could send a copy of the model-view matrix to be used for the light position. But then you’d be performing the transformation for every vertex, when it only needs to be done once.

If you’re using glLight, then gl_LightSource is the mechanism to access those settings in the shader. gl_LightSource[].position is already in eye space.

The same as what? If you have distinct model, view and projection matrices (fixed-function OpenGL combines model and view into a single model-view matrix), the view matrix normally consists solely of rotations and translations, so distances between points and angles between lines aren’t changed by it. IOW, as far as the lighting math is concerned, there’s no difference between world space and eye space.

You can’t normally perform lighting in object space because different objects have different transformations, and the lighting calculations require all vectors to use the same coordinate system. The model part of the model-view transformation often contains scale transformations; for this reason, object-space normals should be transformed by gl_NormalMatrix (which is the inverse-transpose of the upper-left 3x3 submatrix of the model-view matrix) rather than by gl_ModelViewMatrix. You can use gl_ModelViewMatrix if the model transformation contains only uniform scaling (the same scale factor in all directions) and you don’t care about the magnitude of the normals; but if the model transformation contains non-uniform scaling, you need to use the normal matrix to preserve the meaning of dot products.
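A small numeric illustration of that last point (plain Python 3x3 math, illustrative names; in GLSL this role is played by gl_NormalMatrix): with a non-uniform scale, the model matrix bends the normal off the surface, while the inverse-transpose keeps it perpendicular:

```python
def mat3_vec(m, v):
    # row-major 3x3 matrix times a 3-component vector
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

model = [[2, 0, 0],   # non-uniform scale: x stretched by 2
         [0, 1, 0],
         [0, 0, 1]]
# Inverse-transpose of a diagonal matrix is just the reciprocal diagonal
normal_matrix = [[0.5, 0, 0],
                 [0, 1, 0],
                 [0, 0, 1]]

tangent = [1, 1, 0]        # a direction lying in the surface
normal = [-1, 1, 0]        # perpendicular to that tangent

new_tangent = mat3_vec(model, tangent)     # (2, 1, 0)

wrong = mat3_vec(model, normal)            # (-2, 1, 0)
right = mat3_vec(normal_matrix, normal)    # (-0.5, 1, 0)

print(dot(new_tangent, wrong))   # -3: no longer perpendicular to the surface
print(dot(new_tangent, right))   # 0.0: still a valid normal
```

With a uniform scale (or pure rotation/translation), both transforms would have given the same direction, which is why the distinction only matters once non-uniform scaling enters the model matrix.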

And you can’t perform lighting calculations in any space which has a perspective projection relative to world space (i.e. they have to be done before applying the projection matrix, which is why it’s separate).

The view transformation is just the inverse of the camera transformation. With legacy OpenGL, the view transformation was typically constructed using the fact that (A·B)^{-1}=B^{-1}·A^{-1}, i.e. by inverting the individual rotations and translations and applying them in the reverse order. That was done first, then any model transformation was appended to it prior to rendering operations, using gl{Push,Pop}Matrix if necessary to save and restore the view matrix. With modern OpenGL, you’d typically construct the camera transformation as for any other object and then explicitly invert it (with e.g. glm::inverse) to get the view transformation.
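That identity can be checked numerically; here is a minimal sketch in plain Python (row-major matrices, illustrative names) building a camera as translate · rotate and its view matrix as the reversed product of the individual inverses:

```python
import math

def mat_mul(a, b):
    # row-major 4x4 matrix product
    return [[sum(a[r][k] * b[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

def rotate_y(angle):
    c, s = math.cos(angle), math.sin(angle)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

eye = (3.0, 0.0, 2.0)
a = math.radians(30)

# Camera transform: first rotate, then move to the eye position
camera = mat_mul(translate(*eye), rotate_y(a))
# View transform: the inverses applied in the reverse order,
# using rotate_y(-a) as the inverse (= transpose) of rotate_y(a)
view = mat_mul(rotate_y(-a), translate(-eye[0], -eye[1], -eye[2]))

# camera * view should be (numerically) the identity matrix
product = mat_mul(camera, view)
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
ok = all(abs(product[r][c] - identity[r][c]) < 1e-9
         for r in range(4) for c in range(4))
print(ok)  # True
```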

For lights in world space, you’d set the light positions with glLight after constructing the view transformation but before applying any model transformations. The transformation to eye space is performed at the point that glLight is called, so the result is unaffected by any subsequent model transformations.

I’ve been fiddling with the model-view transformation and got an approach that seems to differ from what @GClements presents. I’ve only tested it against the glm::lookAt(…) matrix, which I understand to be the model-view matrix. Please correct me if that’s an incorrect assumption.

vector<eye-coords> = scale(-1,1,-1) x camera_rotate_matrix x inverse(camera_translate_matrix) x vector<world_coords>

This transformation equals the glm::lookAt() matrix, which in turn lines up with the ensuing glm::perspective() matrix, though it’s a long time ago since I tested that step.
I’ve not managed to make inverse() or transpose() bite properly on the problem the way @GClements manages to. Maybe the inverse() on the rotate_matrix needs to apply only to the upper-left 3x3 elements, I don’t know. On the translate_matrix, inverse() just flips the signs.
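For comparison, here is a sketch (plain Python, row-major math, so not directly glm’s column-major layout) of a lookAt-style view matrix built purely as the inverse of the camera transform: the camera axes transposed into the upper-left 3x3, with the inverted translation folded in, and no extra scale step:

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]
def normalize(v):
    length = math.sqrt(dot(v, v))
    return [x / length for x in v]

def look_at(eye, target, up):
    f = normalize(sub(target, eye))   # camera forward
    s = normalize(cross(f, up))       # camera right
    u = cross(s, f)                   # camera up (re-orthogonalized)
    # Rows are the camera axes (i.e. R^T); translation is -R^T * eye,
    # and forward maps to -Z, which is where the minus signs come from
    return [[ s[0],  s[1],  s[2], -dot(s, eye)],
            [ u[0],  u[1],  u[2], -dot(u, eye)],
            [-f[0], -f[1], -f[2],  dot(f, eye)],
            [  0.0,   0.0,   0.0,          1.0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

view = look_at(eye=[0, 0, 2], target=[0, 0, -5], up=[0, 1, 0])

# The eye maps to the origin of eye space...
print(mat_vec(view, [0, 0, 2, 1]))    # [0.0, 0.0, 0.0, 1.0]
# ...and the target lands on the -Z axis, 7 units in front of the camera
print(mat_vec(view, [0, 0, -5, 1]))   # [0.0, 0.0, -7.0, 1.0]
```

So the inverse() on the rotation part does indeed reduce to a transpose of the upper-left 3x3 (a rotation matrix is orthonormal), and the translation part just gets its signs flipped, rotated into the camera’s frame.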