These two shaders are giving me very strange results. They are supposed to implement Phong shading; please help. (I am currently shading a sphere, so I calculate the normals in the shader.)
Is it wrong that I’m doing everything in world space? I’ve seen many tutorials do the lighting in eye space, but then they have to transform the normals into eye space as well. Is lighting better done in eye space? I’ve improved the fragment shader so that the halfway-vector (Blinn-Phong) variant is also supported:
It’s not wrong. Choosing one space over another is just a matter of efficiency and convenience, given how you decide to implement lighting. With infinite precision, there is no difference in the result between one orthonormal space and another.
Yes, if you light in eye space, you have to transform the vertex position and normal to eye space. But you can pre-transform the light position and direction vector to eye space on the CPU once per frame, pass those in as uniforms, and use them as-is without transforming them in the shader at all. Another nice thing about lighting in eye space is that the viewer is at the origin, so vector calculations that use the eye position or direction (e.g. specular) are simpler and don’t involve subtracting two points. And you don’t need to pass in the eye position or look direction, because they’re implicit.
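To sketch that once-per-frame pre-transform on the CPU (plain Python with row-major 4×4 matrices; the view-matrix values and the uniform name in the comment are made up for illustration):

```python
def mat4_mul_vec4(m, v):
    # Row-major 4x4 matrix times a 4-component column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical view matrix: camera at the origin, pulled back 5 units along +Z.
view = [[1.0, 0.0, 0.0,  0.0],
        [0.0, 1.0, 0.0,  0.0],
        [0.0, 0.0, 1.0, -5.0],
        [0.0, 0.0, 0.0,  1.0]]

light_world = [1.0, 2.0, 3.0, 1.0]   # w = 1: point light (use w = 0 for directional)

# Once per frame, on the CPU:
light_eye = mat4_mul_vec4(view, light_world)
print(light_eye[:3])                 # [1.0, 2.0, -2.0]
# ...then upload light_eye as a uniform and use it as-is in the shader,
# e.g. glUniform3f(u_light_pos_eye, *light_eye[:3])   (uniform name is hypothetical)
```

For a directional light (w = 0) only the rotational part of the view matrix affects the vector, which is exactly what you want.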
If you light in world space, you still need to transform the vertex position and normal to world space (in most non-trivial apps, object space != world space). And you can pre-transform the light position/direction into world space on the CPU once per frame, just as above.

But the problem is that in many apps, “world space” is just too big to represent with floats (requiring doubles or some other method), so you end up not being able to light in world space in the shader without horrible lighting artifacts due to running out of precision.

Further, “world space” is not a concept explicitly exposed by OpenGL. MODELVIEW takes you directly from object space to eye space; there is no stopping-off point at world space. True, you can pass in another matrix that takes you from object to world (i.e. a MODELING matrix by itself), but you otherwise wouldn’t need it. Also, you need to know how to get to eye space anyway (i.e. the eye position and/or look vector) to do things like specular and fog, so you have to pass those in as well (whereas in eye space they’re implicit, so you don’t).
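To make the precision point concrete, here is a small Python sketch of how far apart adjacent 32-bit floats are at various distances from the origin (the `struct` round-trip mimics what a GPU `float` holds; the units are arbitrary, say metres):

```python
import struct

def f32_next(x):
    # Next representable 32-bit float above x (for x > 0), via the bit pattern.
    bits = struct.unpack('I', struct.pack('f', x))[0]
    return struct.unpack('f', struct.pack('I', bits + 1))[0]

print(f32_next(1.0) - 1.0)                    # ~1.19e-07: sub-micron steps near the origin
print(f32_next(100_000.0) - 100_000.0)        # 0.0078125: ~8 mm steps 100 km out
print(f32_next(10_000_000.0) - 10_000_000.0)  # 1.0: whole-metre steps 10,000 km out
```

At 10,000 km from the origin, positions snap to a 1-metre grid, so any lighting math done on raw world-space coordinates falls apart long before that.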
By this you probably mean the light direction vector for directional lights?
Thanks for the exhaustive reply. The -mv_matrix[3] thing I did in the shader is not correct (since mv_matrix is the product R*T, its fourth column is R·t rather than t, so negating it does not give the eye position), which had me a little confused.
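For anyone hitting the same confusion: with V = R*T, the fourth column of V is R·t, so negating it gives -R·t; to recover the camera's world position (-t) you have to undo the rotation first, i.e. camera_world = -Rᵀ · V[3].xyz. A small pure-Python check (the 90° rotation and the translation values are arbitrary):

```python
def mat3_vec(m, v):
    # Row-major 3x3 matrix times a 3-vector.
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

def transpose3(m):
    return [[m[c][r] for c in range(3)] for r in range(3)]

R = [[0.0, -1.0, 0.0],    # 90-degree rotation about Z
     [1.0,  0.0, 0.0],
     [0.0,  0.0, 1.0]]
t = [1.0, 2.0, 3.0]

# The fourth column of V = R*T is R*t:
col3 = mat3_vec(R, t)                        # [-2.0, 1.0, 3.0]

# Wrong: negating the column directly gives -R*t.
wrong = [-c for c in col3]                   # [2.0, -1.0, -3.0]

# Right: rotate back first, then negate: camera_world = -R^T * col3 = -t.
cam = [-c for c in mat3_vec(transpose3(R), col3)]
print(cam)                                   # [-1.0, -2.0, -3.0]
```

(Here V maps world to eye, so the camera's world position p satisfies V·p = 0, which gives p = -t.)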
Are there any tutorials on how to overcome the world-space float-precision problem?
Directional lights, or positional lights with a light cone (spotlights). With the former, there is only a direction vector; for positional lights you have both a position and a cone-axis vector.
Are there any tutorials on how to overcome the world-space float-precision problem?
There are some references out there. Just don’t use world space in your shader; then you’re free to use doubles (or some other extended-precision method) for the interim MODELVIEW computations on the CPU. If you need more, just ask.
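A quick Python illustration of why composing MODELVIEW in double precision on the CPU fixes this (the 1000 km coordinates are arbitrary; `f32` mimics a GPU `float`):

```python
import struct

def f32(x):
    # Round a Python double to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

camera_x = 1_000_000.0    # camera 1000 km from the world origin (units = metres)
object_x = 1_000_000.01   # object 1 cm in front of the camera

# Naive: upload raw world-space positions as float32 and subtract in the
# shader -- both round to the same float, and the 1 cm offset vanishes:
print(f32(object_x) - f32(camera_x))   # 0.0

# Better: fold the double-precision camera translation into MODELVIEW on the
# CPU, so the shader only ever sees small eye-relative values:
print(f32(object_x - camera_x))        # ~0.01
```

The subtraction happens in double on the CPU, so only a small, well-conditioned number ever gets rounded to float32 for the GPU.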