strange lighting

These two shaders are giving me very strange results. They are supposed to implement Phong shading. Please help. (I am currently shading a sphere, so I compute the normals in the shader.)


#version 120

// viewing transformations
uniform mat4 mv_matrix;
uniform mat4 p_matrix;

// lights
uniform vec3 ws_light_position;
uniform vec3 light_color;

// ambient values
uniform float ambient_coefficient;
uniform vec3 ambient_color;

// diffuse values
uniform float diffuse_coefficient;
uniform vec3 diffuse_color;

// attributes
attribute vec3 ws_vertex_position;

varying vec3 diffuse_color_;
varying vec3 ws_vertex_position_;

void main()
{
  gl_Position = p_matrix * mv_matrix * vec4(ws_vertex_position, 1);

  // the mesh is a sphere (assumed centered at the origin), so the position doubles as the normal
  vec3 ws_N = normalize(ws_vertex_position);
  vec3 ws_L = normalize(ws_light_position - ws_vertex_position);

  diffuse_color_ = ambient_coefficient * ambient_color +
    light_color * diffuse_coefficient * diffuse_color *
    max(dot(ws_L, ws_N), 0.0);

  ws_vertex_position_ = ws_vertex_position;
}


#version 120

// viewing transformations
uniform mat4 mv_matrix;

// lights
uniform vec3 ws_light_position;
uniform vec3 light_color;

// specular values
uniform float specular_coefficient;
uniform vec3 specular_color;
uniform float specular_n;

// varyings (interpolated from the vertex shader)
varying vec3 diffuse_color_;
varying vec3 ws_vertex_position_;

void main()
{
  vec3 ws_viewer_position = vec3(mv_matrix[3]);

  vec3 ws_N = normalize(ws_vertex_position_);
  vec3 ws_L = normalize(ws_light_position - ws_vertex_position_);
  vec3 ws_R = 2.0 * dot(ws_L, ws_N) * ws_N - ws_L;
  vec3 ws_V = normalize(ws_viewer_position - ws_vertex_position_);

  float RdotV = dot(ws_R, ws_V);

  if (RdotV > 0.0)
  {
    gl_FragColor = vec4(diffuse_color_ +
      light_color * specular_coefficient * specular_color *
      pow(RdotV, specular_n), 1);
  }
  else
  {
    gl_FragColor = vec4(diffuse_color_, 1);
  }
}

Here’s a screenshot. Note that the specular highlight is on the side away from the light; the shape of the highlight also looks weird to me.

[screenshot 1]

Apparently the problem was this line:

vec3 ws_viewer_position = vec3(mv_matrix[3]);

which I’ve changed to:

vec3 ws_viewer_position = -vec3(mv_matrix[3]);

Still, the specular highlight is weird. Help?

[screenshot 2]

Is it wrong that I’m doing everything in world space? I’ve seen many tutorials doing the lighting in eye space, but then they have to transform the normals into eye space as well. Maybe lighting is better done in eye space? I’ve also improved the fragment shader, so the halfway-vector (Blinn-Phong) variant is now supported too:


#version 120

// viewing transformations
uniform mat4 mv_matrix;

// lights
uniform vec3 ws_light_position;
uniform vec3 light_color;

// specular values
uniform float specular_coefficient;
uniform vec3 specular_color;
uniform float specular_n;

// varyings (interpolated from the vertex shader)
varying vec3 diffuse_color_;
varying vec3 ws_vertex_position_;

void main()
{
  vec3 ws_viewer_position = -vec3(mv_matrix[3]);

  vec3 ws_N = normalize(ws_vertex_position_);
  vec3 ws_L = normalize(ws_light_position - ws_vertex_position_);
  vec3 ws_V = normalize(ws_viewer_position - ws_vertex_position_);
  vec3 ws_R = 2.0 * dot(ws_L, ws_N) * ws_N - ws_L;
  //vec3 ws_R = normalize(ws_L + ws_V);  // Blinn-Phong halfway vector

  float dpa = dot(ws_R, ws_V);
  //float dpa = dot(ws_R, ws_N);         // use with the halfway vector above

  gl_FragColor = (dpa > 0.0) ? vec4(diffuse_color_ +
    light_color * specular_coefficient * specular_color *
    pow(dpa, specular_n), 1) : vec4(diffuse_color_, 1);
}

It’s not wrong. Choosing one space over another is just a matter of efficiency and convenience for your computations, given how you decide to implement lighting. Given infinite precision, there is no difference in the result between one orthonormal space and another.

Yes, if you light in eye space, you have to transform the vertex position and normal to eye space. But you can pre-transform the light position and direction vector to eye space on the CPU once per frame, merely pass those in as uniforms, and use them as-is without having to transform them at all. Another nice thing about lighting in eye space is that the viewer is at the origin, so vector calculations that use the eye position or direction (e.g. specular) are simpler and don’t involve subtracting two points. And you don’t need to pass in the eye position and/or look direction, because they’re implicit.
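As a rough sketch of what that buys you (the names here are made up: es_light_position is assumed to be the light position pre-transformed to eye space on the CPU, and the eye-space position and normal are assumed to arrive as varyings from a matching vertex shader):


#version 120

// light position pre-transformed to eye space on the CPU, once per frame
uniform vec3 es_light_position;
uniform vec3 light_color;

// specular values
uniform float specular_coefficient;
uniform vec3 specular_color;
uniform float specular_n;

varying vec3 diffuse_color_;
varying vec3 es_vertex_position_;  // eye-space position from the vertex shader
varying vec3 es_normal_;           // eye-space normal from the vertex shader

void main()
{
  vec3 es_N = normalize(es_normal_);
  vec3 es_L = normalize(es_light_position - es_vertex_position_);
  // the viewer sits at the origin in eye space, so the view vector
  // needs no eye-position uniform and no subtraction of two points
  vec3 es_V = normalize(-es_vertex_position_);
  vec3 es_R = 2.0 * dot(es_L, es_N) * es_N - es_L;

  float RdotV = dot(es_R, es_V);

  gl_FragColor = (RdotV > 0.0) ? vec4(diffuse_color_ +
    light_color * specular_coefficient * specular_color *
    pow(RdotV, specular_n), 1.0) : vec4(diffuse_color_, 1.0);
}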

If you light in world space, you still need to transform the vertex position and normal to world space (in most non-trivial apps, object space != world space). And you can pre-transform the light position/direction into world space on the CPU once per frame, just as in the eye-space case. But the problem is that in many apps “world space” is simply too big to represent with floats (requiring doubles or some other method), so you end up not being able to light in world space in the shader without horrible lighting artifacts due to running out of precision. Further, “world space” is not a concept explicitly exposed by OpenGL: MODELVIEW takes you directly from object space to eye space, with no stopping-off point for world space. True, you can pass in another matrix to take object to world (i.e. a MODELING matrix by itself), but you otherwise wouldn’t need it. Also, you need to know how to get to eye space anyway (i.e. the eye position and/or look vector) to do things like specular and fog, so you have to pass those in (whereas in eye space they’re implicit, so you don’t).
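To make the “you still need to transform” point concrete, here is a vertex-shader sketch with a separate MODELING matrix (model_matrix, view_matrix, and the os_* attribute names are made up; it also assumes the model transform has no non-uniform scale, so its upper 3x3 is safe for normals):


#version 120

// hypothetical split of the modelview into its two halves
uniform mat4 model_matrix;  // object space -> world space
uniform mat4 view_matrix;   // world space -> eye space
uniform mat4 p_matrix;

attribute vec3 os_vertex_position;  // object-space position
attribute vec3 os_normal;           // object-space normal

varying vec3 ws_vertex_position_;
varying vec3 ws_normal_;

void main()
{
  vec4 ws_position = model_matrix * vec4(os_vertex_position, 1.0);
  gl_Position = p_matrix * view_matrix * ws_position;

  ws_vertex_position_ = ws_position.xyz;
  // fine for rotation/translation/uniform scale; a non-uniform scale would
  // need the inverse-transpose of the upper 3x3 instead
  ws_normal_ = normalize(mat3(model_matrix) * os_normal);
}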

But no, you can light in any space you want.

direction vector

By this you probably mean the light direction vector for directional lights?

Thanks for the exhaustive reply. The -mv_matrix[3] thing I did in the shader is not correct (because mv_matrix is the product R*T), so this had me a little confused.
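For reference, if mv_matrix really is R*T (a pure camera rotation times a translation, with no modelling part folded in), recovering the world-space eye position has to undo the rotation as well, not just negate the translation column. A minimal sketch:


// the eye position e maps to the eye-space origin:
//   mv_matrix * vec4(e, 1) = vec4(0, 0, 0, 1)
// with mv_matrix = R * T this solves to e = -transpose(R) * t, where t is the
// translation column of mv_matrix and transpose(R) == inverse(R) for an orthonormal R
vec3 ws_viewer_position = -transpose(mat3(mv_matrix)) * vec3(mv_matrix[3]);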

Are there any tutorials on how to overcome the world-space float problem?

Directional, or positional lights with a light cone. With the former there is only a direction vector, but for positional lights you have both a position and a cone-axis vector.
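As a sketch of how the cone-axis vector gets used (ws_cone_axis and cone_cos_cutoff are made-up uniform names; the cutoff is assumed to store the cosine of the cone’s half-angle):


#version 120

uniform vec3 ws_light_position;
uniform vec3 ws_cone_axis;      // normalized cone axis, pointing away from the light
uniform float cone_cos_cutoff;  // cosine of the cone half-angle

varying vec3 ws_vertex_position_;

// 1.0 when the fragment lies inside the light cone, 0.0 otherwise
float spot_factor()
{
  vec3 ws_L = normalize(ws_light_position - ws_vertex_position_);
  // -ws_L points from the light toward the fragment
  return (dot(-ws_L, ws_cone_axis) > cone_cos_cutoff) ? 1.0 : 0.0;
}

void main()
{
  gl_FragColor = vec4(vec3(spot_factor()), 1.0);  // visualize the cone mask
}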

Are there any tutorials on how to overcome the world-space float problem?

There are some references out there. Just don’t use “world space” in your shader. Then you’re free to use doubles or whatever for interim MODELVIEW computations on the CPU. If you need more, just ask.

Is there an easy way to combine shaders on the CPU (in an #include-like fashion), so that I can use “libraries” for linear algebra, quaternions, etc.?
