I was just reading through a vertex program lab from the NVIDIA SDK, and it says I need to transform the (light) normal to eye space, which makes sense. But the vertex code does this by transforming with the transpose of the inverse modelview matrix… could someone explain the theory behind this?
Have a look at:
Appendix F - Homogeneous Coordinates
Then find the “Transforming Normals” bit.
The math there leads to the following conclusion:
“vectors are transformed by the inverse transpose of the transformation that transforms points”
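The reasoning in a nutshell: a normal n is defined by being perpendicular to every tangent t in the surface, i.e. nᵀt = 0. If points (and hence tangents) are transformed by a matrix M, so t′ = Mt, then the transformed normal n′ must satisfy n′ᵀ(Mt) = 0. Setting n′ = (M⁻¹)ᵀn works, because n′ᵀMt = nᵀM⁻¹Mt = nᵀt = 0. The sketch below (not from the thread; just an illustration) checks this numerically with a non-uniform scale, the simplest transform for which the naive approach visibly fails:

```python
# Demonstrates why normals need the inverse transpose.
# Assumed setup: a 3x3 non-uniform scale matrix; for a diagonal
# matrix the inverse is just the reciprocal of each diagonal entry.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mat_vec(m, v):
    return [dot(row, v) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

# Non-uniform scale: stretch x by 2.
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
M_inv = [[0.5, 0.0, 0.0],
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
M_inv_T = transpose(M_inv)

tangent = [1.0, 1.0, 0.0]    # lies in the surface
normal  = [1.0, -1.0, 0.0]   # perpendicular to the tangent
assert dot(tangent, normal) == 0.0

t2      = mat_vec(M, tangent)        # transformed tangent
n_naive = mat_vec(M, normal)         # WRONG: transformed like a point
n_right = mat_vec(M_inv_T, normal)   # correct: inverse transpose

print(dot(t2, n_naive))  # 3.0 -- naive normal is no longer perpendicular
print(dot(t2, n_right))  # 0.0 -- perpendicularity preserved
```

Note that for a rigid transform (rotation + translation only), M⁻¹ᵀ equals M for the rotation part, which is why the distinction only bites once non-uniform scaling or shearing enters the modelview matrix.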