That was supposed to be per-vertex lighting when, later on, the fragment shader only does…

gl_FragColor = gl_Color * diffuse_value;

Then, when he moved the first line to the fragment shader - appropriately, by outputting vertex_normal and vertex_light_position to it - the method was supposedly transformed into “per-pixel shading”.

How is that so? The first method appears to be doing the diffuse_value calculation every pixel anyway!

The first version performs the lighting calculation at each vertex and interpolates the resulting diffuse_value, i.e. a color is interpolated.
The second version interpolates the normal and performs the lighting calculation for each pixel.
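For illustration, the two variants might look roughly like this (a sketch, not the poster's actual code; light_position and vertex_position are assumed uniforms/inputs, and the other names mirror the ones used later in this thread):

```glsl
// Version 1 - per-vertex lighting.
// --- vertex shader ---
vec3 n = normalize(NormalMatrix * in_Normal);
vec3 l = normalize(light_position - vertex_position); // direction to the light
diffuse_value = max(dot(n, l), 0.0); // computed once per vertex, then interpolated
// --- fragment shader ---
gl_FragColor = gl_Color * diffuse_value; // diffuse_value arrives pre-interpolated

// Version 2 - per-pixel lighting.
// --- vertex shader ---
vertex_normal = NormalMatrix * in_Normal;          // pass the vectors on instead
vertex_light_position = light_position - vertex_position;
// --- fragment shader ---
float d = max(dot(normalize(vertex_normal),
                  normalize(vertex_light_position)), 0.0); // computed per pixel
gl_FragColor = gl_Color * d;
```

In both cases the final multiply happens per fragment; what differs is whether the non-linear dot-product step runs per vertex or per pixel.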

How is that so? The first method appears to be doing the diffuse_value calculation every pixel anyway!

No, diffuse_value is only calculated at each vertex and then interpolated. That is different from calculating the value for each pixel from an interpolated normal, which is what the second version does.

Thanks very much, that interpolation difference makes it clear.

Since both interpolate, I don’t see how they would produce different results. I usually read that ‘per-pixel is superior’, but here both interpolate, so it’s not as if one is ‘absolutely right’ and the other ‘interpolates and hence is blunt’.

EDIT: …but looking into it, I read that the interpolated inputs need to be normalized…

tobindax, what matters is what is linearly interpolated.

Example :

y = a*x + b

Compute y0 and y1 from x0 and x1: you can then linearly interpolate between y0 and y1 and get the same values as the non-interpolated function, because y is linear in x. This is the vertex shader.

Now with :

z = y*y

it is no longer possible to get a good approximation of z from just a few values of y: you have to compute z for almost every pixel, or the curve will not look right. This is the fragment shader.
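To make the difference concrete: at the midpoint between two samples, interpolating z directly gives a different value than applying z = y*y to the interpolated y:

```latex
\underbrace{\frac{y_0^2 + y_1^2}{2}}_{\text{interpolate } z}
\;-\;
\underbrace{\left(\frac{y_0 + y_1}{2}\right)^2}_{\text{square the interpolated } y}
\;=\; \frac{(y_0 - y_1)^2}{4} \;\ge\; 0
```

So the interpolated-z (per-vertex) version systematically overestimates the true curve between samples, and the error only vanishes when y0 = y1.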

To recap, doing both the linear step and the non-linear step in the vertex shader will reveal more artifacts than doing the linear step in the vertex shader and the non-linear step in the fragment shader.

In reality, it would be even better to have per-pixel normals (no interpolation at all): that is the idea behind normal mapping. http://en.wikipedia.org/wiki/Normal_mapping

EDIT: yes, linear interpolation between two unit-length vectors gives a vector shorter than unit length, so normalization is needed after interpolation.
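A quick check of that: for two unit vectors n0 and n1 separated by an angle θ, the halfway-interpolated vector has length

```latex
\left\| \tfrac{1}{2}\mathbf{n}_0 + \tfrac{1}{2}\mathbf{n}_1 \right\|
= \sqrt{\tfrac{1}{4}\left( \|\mathbf{n}_0\|^2 + 2\,\mathbf{n}_0\!\cdot\!\mathbf{n}_1 + \|\mathbf{n}_1\|^2 \right)}
= \sqrt{\tfrac{1 + \cos\theta}{2}}
= \cos\tfrac{\theta}{2} \;<\; 1 \qquad (0 < \theta \le \pi)
```

so the more the vertex normals disagree, the shorter (and dimmer, if used unnormalized in a dot product) the interpolated vector becomes.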

Right now I get the same solid color across the whole of each triangle (it changes, but only as a whole). I would guess per-pixel shading should show differences along the length of a triangle.

main code involved:

out vec3 vertex_normal;
out vec3 vertex_light_position;

… in the vertex shader.

with

// old gl_NormalMatrix: "transpose of the inverse of the upper
// leftmost 3x3 of gl_ModelViewMatrix"
mat3 NormalMatrix = transpose(inverse(mat3(ModelViewMatrix)));
vertex_normal = normalize(NormalMatrix * in_Normal);
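For completeness, the matching fragment shader might look like this (a sketch assuming the same in/out names as above, and the same legacy gl_FragColor/gl_Color mix used earlier in the thread; the re-normalization after interpolation is the important part):

```glsl
in vec3 vertex_normal;
in vec3 vertex_light_position;

void main() {
    // The interpolated vectors are shorter than unit length,
    // so normalize them again before the dot product.
    vec3 n = normalize(vertex_normal);
    vec3 l = normalize(vertex_light_position);
    float diffuse_value = max(dot(n, l), 0.0);
    gl_FragColor = gl_Color * diffuse_value;
}
```

If every triangle still comes out as a single flat color, it is worth checking that the normals differ per vertex (per-face normals, or a `flat` interpolation qualifier, would produce exactly that symptom).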