# Specular highlight movement

Hello.

I am trying to implement simple directional lighting. While simple diffuse shading is an easy thing to do, the problems come with the specular component.
When the viewer (camera) moves, the specular highlight is supposed to move with it. The problem is that I can't achieve that effect.

I have read that the correct thing to do is to multiply the normal by the normal matrix, and that the normal matrix is just the inverse transpose of the ModelView matrix.
I'm using the GLM library; I get the view matrix by calling glm::LookAt with the specified parameters, and multiply it with my model's matrix to get the modelview matrix. The inverse transpose of that matrix, named NMTX in the shaders, is sent as a uniform and multiplied with the normal in the vertex shader like this:

``````
fragmentNormal = normalize(NMTX * vertexNormal);
``````

Then, fragmentNormal is sent to the fragment shader:

``````
vec3 normal = fragmentNormal;

vec3 col = texture(tex, fragmentUV).rgb;
vec3 scatteredLight = vec3(0.0f);
vec3 reflectedLight = vec3(0.0f);
...
vec3 direction = normalize(lights[i].position - lights[i].target);

float diffuse = max(0.0f, dot(normal, direction));
float specular = max(0.0f, dot(normal, lights[i].halfwayVector));

if (diffuse == 0) specular = 0;
else specular = pow(specular, Shininess) * lights[i].strength;

scatteredLight += ambientLight + lights[i].color * diffuse;
reflectedLight += lights[i].color * specular;
...
outColor = reflectedLight;

``````

This code generates the specular highlights fine, but they only change when the camera rotates. I am aware this may be a coordinate-space issue, as the light direction and halfway vector are in model coordinates while the normals are in eye coordinates (after multiplying by the matrix in the vertex shader). I tried to overcome that by multiplying these two vectors by the normal matrix too, but then the specular highlight just went completely static.

Ignoring performance issues in the above shaders (as I just want to understand the issue I described), what am I doing wrong?

Make sure all the relevant vectors (i.e. light direction, half vector and normal) are in the same coordinate space, preferably eye/camera space, and that they are normalized prior to any calculation. The code you posted is incomplete and it's hard to tell what could be wrong. Post the complete shader code.

I believe all the directions are in eye space and they are normalized.

Here are both shaders. They contain some unused code, but don't mind it; it's just left over from my experiments.

``````
#version 430

layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal;

out vec2 fragmentUV;
out vec3 fragmentNormal;
out vec3 fragmentVertPosition;

uniform mat4 MVP;
uniform mat4 MV;
uniform mat3 NMTX;
uniform mat2 TEXMTX = mat2(1.0f);

void main()
{
    gl_Position = MVP * vec4(vertexPosition, 1.0f);
    fragmentNormal = normalize(NMTX * vertexNormal);
    fragmentVertPosition = vertexPosition;
    fragmentUV = TEXMTX * vertexUV;
}

``````

``````
#version 430

const int MAX_LIGHTS = 5;

const uint LIGHT_OFF = 1u;
const uint LIGHT_GLOBAL = 2u;
const uint LIGHT_SPOTLIGHT = 4u;
const uint LIGHT_POINT = 8u;

uniform LightProperties
{
    uint type;

    vec3 color;
    vec3 position;
    vec3 target;
    vec3 halfwayVector;

    float strength;
    float spotlightCosCutoff;
    float spotlightExponent;

    float constAttenuation;
    float linearAttenuation;
} lights[MAX_LIGHTS];

in vec2 fragmentUV;
in vec3 fragmentNormal;
in vec3 fragmentVertPosition;

out vec3 outColor;

uniform sampler2D tex;
uniform sampler2D normalMap;
uniform bool useNormalMap;

uniform vec3 ambientLight;
uniform float Shininess;
uniform vec3 EyeDirection;
uniform mat3 NMTX;
uniform mat4 MODELMTX;
uniform uint LightCount = 0u;

void main()
{
    //vec3 normal = normalize((NMTX * vec4(fragmentNormal, 0.0f)).xyz);
    vec3 normal = fragmentNormal;

    vec3 col = texture(tex, fragmentUV).rgb;
    vec3 scatteredLight = vec3(0.0f);
    vec3 reflectedLight = vec3(0.0f);

    for (uint i = 0u; i < LightCount; ++i)
    {
        if (lights[i].type == LIGHT_OFF)
            continue;
        else if (lights[i].type == LIGHT_GLOBAL)
        {
            vec3 lPos = lights[i].position;
            vec3 lTgt = lights[i].target;
            vec3 direction = normalize(NMTX * normalize(lPos - lTgt));

            float diffuse = max(0.0f, dot(normal, direction));
            float specular = max(0.0f, dot(normal, normalize(NMTX * lights[i].halfwayVector)));

            if (diffuse == 0) specular = 0;
            else specular = pow(specular, Shininess) * lights[i].strength;

            scatteredLight += ambientLight + lights[i].color * diffuse;
            reflectedLight += lights[i].color * specular;
        }
    }

    outColor = min(col * scatteredLight + reflectedLight, vec3(1.0f));
    outColor = reflectedLight; // debug: show only the specular term
}

``````

This looks like it should work fine if everything is in camera space. Let’s see how you calculate the normal matrix and the half vector.

Note that if you want fixed light direction in world space, you shouldn’t multiply the light vector with the normal matrix because it will rotate the light together with the model, which, if I correctly interpreted your description, might be precisely what’s happening.

The normal matrix calculation:

``````NMTX = glm::mat3(glm::inverse(glm::transpose(MV)));
``````

where MV is the modelview matrix; as I said, I get the view matrix from the glm::LookAt function.

I wasn't really sure how to calculate the half vector, as different resources said different things. I used this approach:

``````
glm::vec3 dirV = game->defaultCamera->GetDirection();
glm::vec3 dirL = glm::normalize(lights[i].properties.position - lights[i].properties.target);
glm::vec3 halfwayVector = glm::normalize(dirL + dirV);

``````

I must say I sent a modified fragment shader in the previous post - that version makes the reflection completely static - no rotations, and no movement either.

Multiplying the half vector and light direction with the normal matrix might be the problem. To bring these vectors to eye space you should multiply them with the inverse transpose of the view matrix, not the modelview matrix. To avoid confusion, it's best to do this prior to sending them to the shader.

So on cpu side:

- do the camera (view) transformations
- the current matrix now represents the view matrix (world-to-camera matrix)
- calculate the inverse transpose of the view matrix (the world-to-camera normal matrix)
- do the model transformations on top of the camera transformations
- the current matrix now represents the modelview matrix (model-to-camera matrix, i.e. your regular modelview matrix)
- calculate the inverse transpose of the modelview matrix (the model-to-camera normal matrix, i.e. your regular normal matrix)
  - send the model-to-camera matrix as the modelview matrix to the shader (as you're already doing)
  - send the model-to-camera normal matrix as the normal matrix to the shader (as you're already doing)

- calculate the light direction and half vector in world space (as you're already doing)
- multiply the half vector and light direction with the world-to-camera normal matrix to bring them to camera space
  - send the camera-space half vector and light direction to the shader

- multiply the input normal with the normal matrix and send it as a varying to the fragment shader (as you're already doing)

- calculate the specular term using the normal (which was transformed to camera space in the vertex shader) and the light direction and half vector (which were transformed to camera space on the CPU side)

So, just to clarify - correct me if I'm wrong, and I'm sure I am somewhere. Doing the camera transformations, which you mentioned, is equivalent to calling the glm::LookAt function, right?

After I get that matrix, I calculate the model-view matrix and send its inverse transpose to the shader, using it only for the normals.
Here comes another question - do I calculate the model-view matrix with the regular view matrix, or with the inverse-transposed one?

Then I calculate the half vector and light direction by multiplying them by the inverse-transposed view matrix. The half vector needs to be calculated from the camera's direction and the light's direction, so do we calculate it with the original light direction, or with the multiplied one?

When I do all of these things, I still get a static reflection. Here is the code. I have moved the matrix calculations into the shaders to make the code simpler for me to follow, just for now.

Calculating the matrices:

``````
glm::mat4 v, p;  // v - View matrix, p - Projection matrix
game->defaultCamera->GetMtx(&p, &v);
MVP = p * v * modelMatrix; // Model-View-Projection
MV = v * modelMatrix;        // Model-View
NMTX = glm::mat3(glm::inverse(glm::transpose(MV)));  // Normal matrix

``````

Calculating the direction and half vector

``````
...
properties.direction = glm::normalize(properties.position - properties.target);
...
glm::vec3 dirV = game->defaultCamera->GetDirection(); // equivalent to glm::normalize(defaultCamera->position - defaultCamera->target);
glm::vec3 dirL = glm::normalize(lights[i].properties.position - lights[i].properties.target);
glm::vec3 halfwayVector = glm::normalize(dirL + dirV);

``````

Then it all is sent to shaders. Vertex shader does exactly the same work as previously, so I don’t attach it here. Here is the fragment shader:

``````
void main()
{
    vec3 normal = fragmentNormal;

    vec3 col = texture(tex, fragmentUV).rgb;
    vec3 scatteredLight = vec3(0.0f);
    vec3 reflectedLight = vec3(0.0f);

    for (uint i = 0u; i < LightCount; ++i)
    {
        if (lights[i].type == LIGHT_OFF)
            continue;
        else if (lights[i].type == LIGHT_GLOBAL)
        {
            float diffuse = max(0.0f, dot(normal, normalize((MV * vec4(lights[i].direction, 0.0f)).xyz)));
            float specular = max(0.0f, dot(normal, normalize((MV * vec4(lights[i].halfwayVector, 0.0f)).xyz)));

            if (diffuse == 0) specular = 0;
            else specular = pow(specular, Shininess) * lights[i].strength;

            scatteredLight += ambientLight + lights[i].color * diffuse;
            reflectedLight += lights[i].color * specular;
        }
    }

    outColor = min(col * scatteredLight + reflectedLight, vec3(1.0f));
    outColor = reflectedLight; // debug: show only the specular term
}

``````

Did I miss something?

I also can't quite understand one thing: what is the purpose of performing the inverse transpose on matrices?

[QUOTE=overTaker;1257648]So, just to clarify. Correct me if I’m wrong, and I’m sure I am somewhere Doing the camera transformations, which you mentioned, is equivalent to calling glm::LookAt function, right?[/quote]
Yes.


[QUOTE=overTaker;1257648]
Here comes my another question - do I calculate model-view matrix with regular view matrix, or with inverse transposed one?[/quote]
Regular.

The important thing is to be consistent with your coordinate spaces when doing calculations. If your camera view direction and light direction are given in world space, when you add them you get the half vector in world space. Now all three of these vectors are in world space. You need them in camera space for the shader, so you multiply them with the world-to-camera matrix and send them to the shader (or do it in the shader if you prefer). The direct answer to your question is: you can do either

``````
halfvector_worldSpace = lightDirection_worldSpace + viewDirection_worldSpace;
halfvector_cameraSpace = matrixWorldToCamera * halfvector_worldSpace;
lightDirection_cameraSpace = matrixWorldToCamera * lightDirection_worldSpace;
viewDirection_cameraSpace = matrixWorldToCamera * viewDirection_worldSpace; // should calculate to (0, 0, 1) when normalized

``````

or

``````
lightDirection_cameraSpace = matrixWorldToCamera * lightDirection_worldSpace;
viewDirection_cameraSpace = matrixWorldToCamera * viewDirection_worldSpace;
halfvector_cameraSpace = lightDirection_cameraSpace + viewDirection_cameraSpace;

``````

In both cases you end up with the same values for lightDirection_cameraSpace and halfvector_cameraSpace, which are the vectors you need to calculate the specular term.

Yes. You're still multiplying your light direction and half vector with the modelview matrix instead of the view matrix. So instead of:

``````
float diffuse = max(0.0f, dot(normal, normalize((MV * vec4(lights[i].direction, 0.0f)).xyz)));
float specular = max(0.0f, dot(normal, normalize((MV * vec4(lights[i].halfwayVector, 0.0f)).xyz)));

``````

You should do:

``````
float diffuse = max(0.0f, dot(normal, normalize((V * vec4(lights[i].direction, 0.0f)).xyz)));
float specular = max(0.0f, dot(normal, normalize((V * vec4(lights[i].halfwayVector, 0.0f)).xyz)));

``````

Where V is the view matrix you get from your camera after calling LookAt().

[QUOTE=overTaker;1257648]
I can’t also quite understand one thing: what is the purpose of performing inverse-transpose on matrices?[/QUOTE]
To avoid problems with non-proportional scaling. For simplicity, you can skip the inverse transposes if you're not using non-proportional scaling. More on this here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/
And now you're left with only 3 matrices:

- modelview matrix - used to transform normals from model space to camera space
- modelviewprojection matrix - used to transform vertices from model space to screen space
- view matrix - used to transform light vectors from world space to camera space

I slowly begin to understand it.

From what I can tell, the most important thing is simply to make sure that the coordinate spaces of all the input vectors - normal, half vector and light direction - are consistent. So they don't really need to be in world space at the very beginning; everything can be done in model space (as that is the simplest one to use in my code).
In fact, the strange thing for me here is that if I take the model matrix away from anything, it doesn't really make any difference (well, except for the MVP matrix of course, but that isn't the case in this topic). I wonder why that is.

I was able to achieve the effect I wanted by doing what you advised, but instead of this:

``````
float diffuse = max(0.0f, dot(normal, normalize((V * vec4(lights[i].direction, 0.0f)).xyz)));
float specular = max(0.0f, dot(normal, normalize((V * vec4(lights[i].halfwayVector, 0.0f)).xyz)));

``````

I used this:

``````
float diffuse = max(0.0f, dot(normal, normalize((-V * vec4(lights[i].direction, 1.0f)).xyz)));
float specular = max(0.0f, dot(normal, normalize((-V * vec4(lights[i].halfwayVector, 1.0f)).xyz)));

``````

So all three input vectors are in their model (or better, local) coordinates.

And I also multiplied the normal by the inverse-transposed view matrix, instead of the inverse-transposed modelview matrix (I have learned that the model matrix in the equation doesn't change anything).

The difference in the above snippets is simply that I made the W component 1 instead of 0. This is very strange to me as well. I have read that a W of 0 is used for directions, as the translation is then ignored when such a vector is multiplied with a matrix. Here, both vectors are directions, so why is W = 1.0f working fine?

I can still tell that the effect I got this way is much sharper than it probably should be.

Well, W should be 0.0 when you're multiplying a direction vector. The difference that actually made it work is, I think, the minus sign, which makes sense since you're calculating the light direction vector as position - target instead of, properly, target - position. This reverses the light direction and probably screws up the half vector too, if the view vector is properly directed. So try to reverse the operands when calculating the direction vector, keep W at 0.0 and remove the minus sign:

``````
glm::vec3 dirL = glm::normalize(lights[i].properties.target - lights[i].properties.position );

``````
``````
float diffuse = max(0.0f, dot(normal, normalize((V * vec4(lights[i].direction, 0.0f)).xyz)));
float specular = max(0.0f, dot(normal, normalize((V * vec4(lights[i].halfwayVector, 0.0f)).xyz)));

``````

If your model is not transformed relative to world space, then your model matrix will be identity and your view matrix will be equal to the modelview matrix. But normally you want your shader to behave properly on transformed models, so the distinction between the modelview and view matrices is necessary.

That explained a lot! While it's now time to rearrange my code to make these calculations easier to perform, which is pretty hard right now, I must thank you for taking the time to explain these things to me!

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.