I’m having some trouble with gl_ModelViewMatrixInverseTranspose and gl_NormalMatrix. I have not applied scaling to the modelview matrix, so I expect that the following three lines should do the same thing:

vec3 A=normalize(gl_ModelViewMatrix * vec4(gl_Normal,0.0f)).xyz;
vec3 B=normalize(gl_ModelViewMatrixInverseTranspose*vec4(gl_Normal,0.0f)).xyz;
vec3 C=normalize(gl_NormalMatrix * gl_Normal);

A is exactly what I expect, but B and C are not. (From visual inspection of the results, B=C which makes sense, assuming gl_NormalMatrix is the upper 3x3 of gl_ModelViewMatrixInverseTranspose.) Am I mistaken in believing that these three should produce the same vector, since no scaling is involved?
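Since the modelview here is only a translation plus rotations, its upper 3x3 stays a pure rotation, and the inverse-transpose of a rotation is the rotation itself, so the three should agree. A quick numeric sanity check of that identity in plain Python (no GL involved; the matrix helpers are hand-rolled for the demo, and the 30-degree rotation is just an arbitrary example):

```python
import math

def transpose3(m):
    return [[m[j][i] for j in range(3)] for i in range(3)]

def inverse3(m):
    # 3x3 inverse via the adjugate; cofactors computed with cyclic indices.
    c = [[m[(i + 1) % 3][(j + 1) % 3] * m[(i + 2) % 3][(j + 2) % 3]
          - m[(i + 1) % 3][(j + 2) % 3] * m[(i + 2) % 3][(j + 1) % 3]
          for j in range(3)] for i in range(3)]
    det = sum(m[0][j] * c[0][j] for j in range(3))
    return [[c[j][i] / det for j in range(3)] for i in range(3)]

# Upper 3x3 of a rotation-plus-translation modelview: a pure rotation.
a = math.radians(30.0)
R = [[math.cos(a), -math.sin(a), 0.0],
     [math.sin(a),  math.cos(a), 0.0],
     [0.0,          0.0,         1.0]]

IT = transpose3(inverse3(R))
# Maximum elementwise difference is effectively zero: the
# inverse-transpose of a rotation is the rotation itself.
print(max(abs(IT[i][j] - R[i][j]) for i in range(3) for j in range(3)))
```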

I was trying to implement very simple lighting, with a non-attenuated point light hard coded and fixed at (0,0,10). I get what I expect when I use the vector A (see above) as the normal for the diffuse shading calculation. B and C make the light source appear to be coming from (0,10,0), and the light position remains fixed relative to the object as I rotate said object, rather than stationary at (0, 0, 10).

Have I blundered, or encountered a genuine bug?

Vertex Shader:

#version 120
void main()
{
    vec3 lightPosition = vec3(0.0f, 0.0f, 10.0f);

    vec3 Kd = vec3(1.0f, 1.0f, 1.0f);
    vec3 lightColor = vec3(1.0f, 1.0f, 1.0f);

    vec3 N = normalize(gl_ModelViewMatrix * vec4(gl_Normal, 0.0f)).xyz;
//    vec3 N = normalize(gl_ModelViewMatrixInverseTranspose * vec4(gl_Normal, 0.0f)).xyz;
//    vec3 N = normalize(gl_NormalMatrix * gl_Normal);

    vec3 P = vec3(gl_ModelViewMatrix * gl_Vertex); // eye-space vertex position
    vec3 L = normalize(lightPosition - P);

    gl_FrontColor.rgb = Kd * lightColor * max(0, dot(N, L));
    gl_FrontColor.a = 1.0f;
    gl_Position = ftransform();
}

Fragment Shader:

#version 120
void main()
{
    gl_FragColor = gl_Color;
}

Your mistake is in assuming that the inverse transpose of a matrix is equal to the original matrix, which is wrong in most cases. Normals are not transformed like positions; I am sure you will find an explanation if you use Google.

First of all, for your code to be correct, the upper 3x3 part of the matrix needs to be orthonormal: "normal" means there must be no scaling, while "ortho" means there must be no skewing. If those constraints are satisfied, it should be right.
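To see why the orthonormality requirement matters, here is a small illustrative check in plain Python (numbers chosen for the demo): with a non-uniform scale, transforming the normal by the plain matrix bends it off the surface, while the inverse-transpose keeps it perpendicular to a transformed tangent:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

n = (1.0, 1.0, 0.0)      # normal of a 45-degree plane
t = (1.0, -1.0, 0.0)     # tangent lying in that plane; dot(n, t) == 0

s = (2.0, 1.0, 1.0)      # diagonal of a non-uniform scale matrix
St  = tuple(si * ti for si, ti in zip(s, t))   # transformed tangent
Sn  = tuple(si * ni for si, ni in zip(s, n))   # WRONG: plain matrix on normal
ITn = tuple(ni / si for si, ni in zip(s, n))   # inverse-transpose on normal

print(dot(Sn, St))   # 3.0 -> no longer perpendicular to the surface
print(dot(ITn, St))  # 0.0 -> still perpendicular
```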

Note that the matrix * vector products in your first two lines result in a 4-component vector whose last component can be non-zero, so you'll have to move the .xyz selection operator inside the parentheses of normalize.


Is it in the GLSL standard to write 0.0f?
I think you have to write 0.0 even if you have GLSL 1.20

I’ve never had any problems with using ‘f’ to indicate floating-point literals. Actually, I would recommend using it, because future hardware will support double precision.


Zengar - In general you’re correct: normally you need the inverse transpose of your transformation matrix to transform normals. I’ve tried that (B and C in the original post), and that’s exactly what is not working.

Nico - You’re right about .xyz, thanks for the catch. It’s not the root of the problem, though: w = 0, so it works out the same, and I get exactly the same results.

Non-uniform scaling would create a non-orthonormal matrix; my intention was to indicate that I had not done so, and I should have been clearer on that point. The only transformation I’m applying (until I start to interact) is a single translation, leaving the upper 3x3 as the identity, and said interaction consists only of applying rotations. The light appears to be coming from object-space +Y rather than world-space +Z when I use gl_ModelViewMatrixInverseTranspose or gl_NormalMatrix to transform the normals.

V-man - #version 110 doesn’t like it, but #version 120 allows it; 120 also implicitly converts the 0 in max to a float. I’ve made the cosmetic changes needed to compile under #version 110, but still no change.

Thanks for the help guys.

Not at all, you were clear enough. I just wanted to draw your attention to the fact that non-uniform scaling is not the only cause of non-orthonormal matrices; skewing is also a problem.

In theory it’s correct, so I suggest you write the matrix components into a framebuffer object with a 32-bit floating-point format and check whether they are what you expect.


That’s a good idea. Before I had the chance to try it, I got the opportunity to run my code on another Mac running OS X 10.4, and everything worked as expected. I’m writing this off as buggy Leopard drivers; I’ll verify by testing it on another machine as well, but I’m fairly certain at this point.

Thanks for the help :)
