# Lateral side of the cube being lit by specular light

The specular light is affecting the lateral sides of the cube, when it shouldn’t because the light is in front of the cube. The camera is at position (0, 0, 3) and the light at (0, 0, 5). The model matrix is the identity, so the cube is at (0, 0, 0). I’m calculating everything in view space. I’ve written some classes to reuse code and abstract a few concepts. Below are the pieces of code I think are relevant to the problem, along with images showing exactly what I’m talking about.

``````
...
void main()
{
    gl_Position = mvpMatrix * inPosition;
    viewPosition = (modelView * inPosition).xyz;
    //normal = mat3(modelView) * inNormal;
    normal = mat3(transpose(inverse(modelView))) * inNormal;
}

``````
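For what it’s worth, the commented-out line and the inverse-transpose line only differ when `modelView` contains non-uniform scale. A minimal C++ sketch of why (plain structs and hypothetical helper names, no glm; the matrix is diagonal so its inverse-transpose is trivial to write down):

```cpp
#include <cassert>

// Hypothetical minimal vec3 helpers, just for this sketch (no glm).
struct V3 { float x, y, z; };
static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Apply a diagonal 3x3 matrix diag(s.x, s.y, s.z) to a vector.
static V3 applyDiag(V3 s, V3 v) { return { s.x * v.x, s.y * v.y, s.z * v.z }; }

// Take a surface with normal n = (1, 1, 0) and tangent t = (1, -1, 0),
// so dot(n, t) == 0. Scale the model non-uniformly by M = diag(2, 1, 1).
// Transforming the normal by M directly breaks perpendicularity:
static float naiveDot() {
    V3 m = { 2.0f, 1.0f, 1.0f };     // M (diagonal, so M == transpose(M))
    V3 t = { 1.0f, -1.0f, 0.0f };    // tangent, lies in the surface
    V3 n = { 1.0f,  1.0f, 0.0f };    // surface normal
    V3 tPrime = applyDiag(m, t);     // transformed tangent: (2, -1, 0)
    V3 nNaive = applyDiag(m, n);     // naive normal: (2, 1, 0)
    return dot3(tPrime, nNaive);     // 4 - 1 = 3: no longer perpendicular
}

// Transforming it by inverse(transpose(M)) = diag(1/2, 1, 1) keeps it:
static float inverseTransposeDot() {
    V3 mInvT = { 0.5f, 1.0f, 1.0f }; // inverse(transpose(M)) for diag(2, 1, 1)
    V3 t = { 1.0f, -1.0f, 0.0f };
    V3 n = { 1.0f,  1.0f, 0.0f };
    V3 tPrime = applyDiag({ 2.0f, 1.0f, 1.0f }, t);
    V3 nFixed = applyDiag(mInvT, n); // corrected normal: (0.5, 1, 0)
    return dot3(tPrime, nFixed);     // 1 - 1 = 0: still perpendicular
}
```

Since the question’s `modelView` has no scale at all (identity model, rigid view), both shader lines should produce the same normals here; the inverse-transpose is simply the general-purpose choice.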

``````
...
void main()
{
    vec3 ambientColor = material.ambient_ * light.ambient_;

    vec3 lightDirection = normalize(light.position_ - viewPosition);
    vec3 normalNormalized = normalize(normal);
    float dotNormalLightDir = dot(normalNormalized, lightDirection);
    float diffuseFactor = max(dotNormalLightDir, 0.0f);

    if (dotNormalLightDir > 0.0f)
    {
        vec3 diffuseColor = diffuseFactor * material.diffuse_ * light.diffuse_;

        vec3 viewerDirection = normalize(-viewPosition);
        vec3 reflectedDirection = reflect(-lightDirection, normalNormalized);
        float specularFactor = pow(max(dot(viewerDirection, reflectedDirection), 0.0f), material.shininess_);
        vec3 specularColor = (specularFactor * material.specular_) * light.specular_;

        color = clamp(ambientColor + diffuseColor + specularColor, 0.0f, 1.0f);
    }
    else
    {
        color = clamp(ambientColor, 0.0f, 1.0f);
    }
}

``````
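Given the setup in the question (view space, camera at the origin, light at (0, 0, 5), cube centered at the origin with half-extent 0.5), the expected behavior can be checked on the CPU. This is a sketch with a hypothetical minimal vec3, mirroring the shader’s `dotNormalLightDir`: a front-face fragment should get a positive dot product (diffuse + specular), while a lateral (+X) fragment should get a negative one, sending the shader into the ambient-only branch:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical minimal vec3 helpers, mirroring the GLSL above (no glm).
struct V3 { float x, y, z; };
static V3 sub3(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot3(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static V3 normalize3(V3 v) {
    float len = std::sqrt(dot3(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Same quantity as dotNormalLightDir in the fragment shader.
static float dotNormalLightDir(V3 fragPos, V3 normal, V3 lightPos) {
    return dot3(normalize3(normal), normalize3(sub3(lightPos, fragPos)));
}

// Front-face fragment: normal +Z, light straight ahead -> lit.
static float frontDot() {
    return dotNormalLightDir({ 0.0f, 0.0f, 0.5f }, { 0.0f, 0.0f, 1.0f },
                             { 0.0f, 0.0f, 5.0f });
}

// Lateral (+X) fragment: the light at x = 0 lies behind the face's plane
// x = 0.5, so the dot product is negative and the else-branch (ambient
// only) should be taken -- no specular on the lateral sides.
static float sideDot() {
    return dotNormalLightDir({ 0.5f, 0.0f, 0.0f }, { 1.0f, 0.0f, 0.0f },
                             { 0.0f, 0.0f, 5.0f });
}
```

So, with correct normals and a correctly transformed light position, the shader above really should leave the lateral faces with ambient only.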

OpenGLCube.cpp:

``````
...
const float length = 0.5f;
const std::vector<glm::vec3> positions =
{
    // front
    { -length, -length,  length },
    {  length, -length,  length },
    {  length,  length,  length },
    { -length,  length,  length },
    // back
    { -length, -length, -length },
    {  length, -length, -length },
    {  length,  length, -length },
    { -length,  length, -length },
    // front
    { -length, -length,  length },
    {  length, -length,  length },
    {  length,  length,  length },
    { -length,  length,  length },
    // back
    { -length, -length, -length },
    {  length, -length, -length },
    {  length,  length, -length },
    { -length,  length, -length },
    // front
    { -length, -length,  length },
    {  length, -length,  length },
    {  length,  length,  length },
    { -length,  length,  length },
    // back
    { -length, -length, -length },
    {  length, -length, -length },
    {  length,  length, -length },
    { -length,  length, -length }
};

const std::vector<GLuint> indices =
{
    // front
    0, 1, 2,
    2, 3, 0,
    // top
    11, 10, 6,
    6, 7, 11,
    // back
    15, 14, 5,
    5, 4, 15,
    // bottom
    12, 13, 9,
    9, 8, 12,
    // left
    20, 16, 19,
    19, 23, 20,
    // right
    17, 21, 22,
    22, 18, 17
};

const std::vector<glm::vec3>& normals(GetNormalsByFace(positions, indices));

``````

I calculate the normals through this method:

``````
std::vector<glm::vec3> GetNormalsByFace(const std::vector<glm::vec3>& vertices, const std::vector<GLuint>& indices)
{
    const size_t nTriangles(indices.size() / 3);
    std::vector<glm::vec3> normals(vertices.size());

    for (size_t i = 0; i < nTriangles; ++i)
    {
        const size_t i3 = i * 3;
        const GLuint v1(indices[i3]), v2(indices[i3 + 1]), v3(indices[i3 + 2]);

        const glm::vec3& u1(vertices[v1]);
        const glm::vec3& u2(vertices[v2]);
        const glm::vec3& u3(vertices[v3]);

        // Unnormalized face normal from two triangle edges. Assigning it to
        // all three vertices only yields per-face normals because the cube
        // duplicates its vertices for each face.
        glm::vec3 a1(u2 - u1), a2(u3 - u1);
        glm::vec3 normal(glm::cross(a1, a2));

        normals[v1] = normal;
        normals[v2] = normal;
        normals[v3] = normal;
    }

    return normals;
}

``````
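As a sanity check on this method, here is a sketch (hypothetical minimal vec3 helpers, no glm) of the cross product for the first front-face triangle, vertices 0, 1, 2 of the data above; with that counter-clockwise winding the face normal should come out as +Z:

```cpp
#include <cassert>

// Hypothetical minimal vec3 helpers for this sketch (no glm).
struct V3 { float x, y, z; };
static V3 sub3(V3 a, V3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross3(V3 a, V3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Triangle 0, 1, 2 of the front face, with length = 0.5f as above.
static V3 frontFaceNormal() {
    V3 u1 = { -0.5f, -0.5f, 0.5f };
    V3 u2 = {  0.5f, -0.5f, 0.5f };
    V3 u3 = {  0.5f,  0.5f, 0.5f };
    // Same construction as in GetNormalsByFace: cross(u2 - u1, u3 - u1).
    return cross3(sub3(u2, u1), sub3(u3, u1));
}
```

For this axis-aligned triangle the result happens to be unit length already, but in general the cross product’s magnitude is twice the triangle’s area, which is why the normalization fix mentioned later in the thread matters.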

This method seems correct, as I’ve tested it on some values. If you need more code, I’ll gladly add it here. Thanks!

@edit

Sorry everyone, I forgot to put the images. Here they are:

That’s from one of the sides of the cube, when I rotate the camera. That dark side is the back face. I think it is wrong:
[ATTACH=CONFIG]1482[/ATTACH]

That’s the front of the cube. I think it is ok:
[ATTACH=CONFIG]1481[/ATTACH]

It’s also worth mentioning that I update the light position every frame by multiplying it by the view matrix, as I show below. I’m going to put here some images of an icosphere instead of the cube. It seems to be lit correctly, but I’m not sure; if it is, then the problem is probably with the cube.

Front of the icosphere:
[ATTACH=CONFIG]1483[/ATTACH]

Side of the icosphere:
[ATTACH=CONFIG]1484[/ATTACH]

Almost back of the icosphere:
[ATTACH=CONFIG]1485[/ATTACH]

Why this?

vec3 lightDirection = normalize(light.position_ - viewPosition);

If your light is a point light, then it spreads in all directions from its location. And in the fragment shader you are interested in checking whether light rays reach the current fragment and, if so, how much reflectance you’ll get.

In which case, you shouldn’t be able to see any part of the cube except for the front face.

Note that light.position_ is in eye-space, so if you move the viewpoint by changing the model-view matrix (and don’t change light.position_ itself), the light position will move with it.

If you intend light.position_ to be in world-space, you need to transform it by the model-view matrix in the vertex shader. Or you can transform the light position in the client code; in which case light.position_ itself will be in eye-space but its value will change in order to maintain a fixed world-space position.

[QUOTE=Silence;1287514]Why this?

If your light is a point light, then it spreads in all directions from its location. And in the fragment shader you are interested in checking whether light rays reach the current fragment and, if so, how much reflectance you’ll get.[/QUOTE]

Isn’t what you said afterwards the answer to your question? I use that code/variable to do the calculations needed to simulate diffuse and specular lighting.

[QUOTE=GClements;1287516]In which case, you shouldn’t be able to see any part of the cube except for the front face.

Note that light.position_ is in eye-space, so if you move the viewpoint by changing the model-view matrix (and don’t change light.position_ itself), the light position will move with it.

If you intend light.position_ to be in world-space, you need to transform it by the model-view matrix in the vertex shader. Or you can transform the light position in the client code; in which case light.position_ itself will be in eye-space but its value will change in order to maintain a fixed world-space position.[/QUOTE]

I’ve made a code to allow me to rotate the camera around. That’s how I saw the other faces. When I rotate the camera using the mouse, I update the view matrix. For now, I have this code in the function that is called each frame to draw:

``````
glm::mat3 view3x3(view_);
fluid::LightSource light(view3x3 * glm::vec3(0.0f, 0.0f, 5.0f));
...
glUniform3fv(lightPositionLoc, 1, &light.GetPosition());

``````

where view_ is the glm::mat4 view matrix. So I am changing the light position together with the view matrix. I don’t multiply the light position by the model matrix; it’s as if its model matrix were the identity. Should I multiply by it as well?

@edit
I’m sorry, I thought I had put the images in the post.

As I’m not able to edit the original post for some reason, I’m creating this new reply. I have a question: shouldn’t the lateral sides of the cube be lit only by the ambient light?

I’ve made new tests in the fragment shader calculating the color as:

``````
color = vec3(diffuseFactor);
``````

and

``````
color = vec3(dotNormalLightDir);
``````

I will put the resulting images here. Again, I think the icosphere got reasonable results, while the cube didn’t.

Front of the cube:
[ATTACH=CONFIG]1486[/ATTACH]

Lateral side of the cube:
[ATTACH=CONFIG]1487[/ATTACH]

Front of the icosphere:
[ATTACH=CONFIG]1488[/ATTACH]

Lateral side of the icosphere:
[ATTACH=CONFIG]1489[/ATTACH]

Also, I’ve corrected something in the algorithm that calculates the normals: I wasn’t normalizing. So I changed the code from

``````
glm::vec3 normal(glm::cross(a1, a2));

``````

to

``````
glm::vec3 normal(glm::normalize(glm::cross(a1, a2)));

``````

I’ve also tried specifying the normals by hand:

``````
const std::vector<glm::vec3>& normals =
{
    {  0,  0,  1 },
    {  0,  0,  1 },
    {  0,  0,  1 },
    {  0,  0,  1 },
    {  0,  0, -1 },
    {  0,  0, -1 },
    {  0,  1,  0 },
    {  0,  1,  0 },
    {  0, -1,  0 },
    {  0, -1,  0 },
    {  0,  1,  0 },
    {  0,  1,  0 },
    {  0, -1,  0 },
    {  0, -1,  0 },
    {  0,  0, -1 },
    {  0,  0, -1 },
    { -1,  0,  0 },
    {  1,  0,  0 },
    {  1,  0,  0 },
    { -1,  0,  0 },
    { -1,  0,  0 },
    {  1,  0,  0 },
    {  1,  0,  0 },
    { -1,  0,  0 }
};

``````

The results are the same.

[QUOTE=YardenJ2R;1287522]I’ve made a code to allow me to rotate the camera around. That’s how I saw the other faces. When I rotate the camera using the mouse, I update the view matrix. For now, I have this code in the function that is called each frame to draw:

``````
glm::mat3 view3x3(view_);
fluid::LightSource light(view3x3 * glm::vec3(0.0f, 0.0f, 5.0f));
...
glUniform3fv(lightPositionLoc, 1, &light.GetPosition());

``````

where view_ is the glm::mat4 view matrix. So I am changing the light position together with the view matrix. I don’t multiply the light position by the model matrix; it’s as if its model matrix were the identity. Should I multiply by it as well?
[/QUOTE]
The light position is a position, not a direction. Its W coordinate should be 1 and any transformations should use a full 4x4 matrix.

If you want the light position to be fixed relative to the cube, the light position should be transformed by the same model-view matrix as the cube’s vertices.

If you want the light position to be fixed relative to the viewer, the light position shouldn’t be transformed. In that case, rotating the cube will result in the highlights moving onto the other sides of the cube as they come to face towards the light.
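The mat3 cast in the posted client code looks like the culprit: glm::mat3(view_) keeps only the upper-left 3x3 (the rotation part) and drops the view translation, so the light never actually moves into eye space. A sketch of the difference (column-major 4x4 as a plain array and hypothetical helpers, no glm), using the question’s camera at (0, 0, 3), i.e. a view matrix that translates by (0, 0, -3):

```cpp
#include <cassert>

struct V4 { float x, y, z, w; };

// Column-major, like glm: m[col][row]. Computes m * v.
static V4 mulMat4(const float m[4][4], V4 v) {
    return {
        m[0][0] * v.x + m[1][0] * v.y + m[2][0] * v.z + m[3][0] * v.w,
        m[0][1] * v.x + m[1][1] * v.y + m[2][1] * v.z + m[3][1] * v.w,
        m[0][2] * v.x + m[1][2] * v.y + m[2][2] * v.z + m[3][2] * v.w,
        m[0][3] * v.x + m[1][3] * v.y + m[2][3] * v.z + m[3][3] * v.w,
    };
}

// View matrix for a camera at (0, 0, 3) looking down -Z: identity rotation,
// translation (0, 0, -3) in the last column.
static const float view[4][4] = {
    { 1, 0, 0, 0 },
    { 0, 1, 0, 0 },
    { 0, 0, 1, 0 },
    { 0, 0, -3, 1 },
};

// Correct: transform the light as a position (w = 1) by the full 4x4.
// The world-space light (0, 0, 5) lands at eye-space (0, 0, 2).
static V4 lightFullMat4() { return mulMat4(view, { 0, 0, 5, 1 }); }

// The mat3 cast amounts to transforming with w = 0 (a direction): the
// translation column is ignored and the "eye-space" light stays at (0, 0, 5).
static V4 lightMat3Cast() { return mulMat4(view, { 0, 0, 5, 0 }); }
```

In glm terms the fix would be something along the lines of `glm::vec3 eyeLight = glm::vec3(view_ * glm::vec4(worldLight, 1.0f));` instead of the mat3 multiply.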

And now it seems to be working. So, thanks a lot everyone, I REALLY appreciate it! 