I am capturing data from my OpenGL applications by listening to function calls on the other side of the OpenGL library. The goal is to generate 3D model files (e.g. .obj) that represent the rendered geometry in eye/view coordinates, that is, after the modelview transformation but before projection or perspective division.
By applying the current 4x4 modelview matrix to all vertices, I can reconstruct my geometry without any problem. But now I am trying to add the vertex normals to my output, and I have a question:
Does the entire 4x4 modelview matrix get applied to the initial [u,v,w] values of a normal to produce its final [u,v,w] values? Since normals are only directional, it would seem that you would want to apply only the rotation (and possibly scale) components of the modelview matrix to them.
Normals aren't transformed by the modelview matrix itself, but by the transpose of its inverse. For pure rotations the two are the same, but not once translation, shearing, or scaling is introduced.
That said, normals are transformed by the entire 4x4 matrix, just like any vertex. A normal has its fourth coordinate set to zero, so any translation in the matrix has no effect on it in practice. Also be aware that any non-zero entry in the fourth row of the matrix (typically present in a projection matrix) will also affect the normal, so using only the top-left 3x3 submatrix is not enough in the general case.
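To see why the inverse-transpose is needed, here is a small numeric illustration (the numbers are my own, not from your capture): under a non-uniform scale, multiplying a normal by the matrix itself produces a vector that is no longer perpendicular to the surface, while the inverse-transpose keeps it perpendicular.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A surface with tangent (1, -1, 0) and normal (1, 1, 0) -- perpendicular.
tangent = [1.0, -1.0, 0.0]
normal  = [1.0,  1.0, 0.0]

# Modelview is a pure non-uniform scale: y is stretched by 2.
scale = [1.0, 2.0, 1.0]                  # diagonal of the 3x3 submatrix
inv_transpose = [1.0 / s for s in scale] # diagonal matrix, so inverse is 1/s

scaled_tangent = [s * t for s, t in zip(scale, tangent)]         # (1, -2, 0)
naive_normal   = [s * n for s, n in zip(scale, normal)]          # (1,  2, 0)
proper_normal  = [i * n for i, n in zip(inv_transpose, normal)]  # (1, 0.5, 0)

print(dot(scaled_tangent, naive_normal))   # -3.0: no longer perpendicular
print(dot(scaled_tangent, proper_normal))  #  0.0: still perpendicular
```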
So, in short, I should get the correctly transformed normal values in eye/view coordinate space if I:
make sure the 4th value of the normal vector is zero
multiply it by the transpose of the inverse of the modelview matrix
It may be necessary to re-normalize the normal afterwards to make sure it is unit length.
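The steps above can be sketched in plain Python (no GL or numpy; all names here are illustrative). This assumes the modelview matrix is affine (bottom row 0,0,0,1), which holds for any stack of glTranslate/glRotate/glScale calls; with the normal's fourth component zero, only the upper-left 3x3 block then survives the inverse-transpose, so it is enough to invert that block.

```python
import math

def upper_left_3x3(m4):
    """Extract the 3x3 linear part of a row-major 4x4 matrix."""
    return [row[:3] for row in m4[:3]]

def inverse_3x3(m):
    """Invert a 3x3 matrix via the adjugate (cofactor) formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

def eye_space_normal(modelview, n):
    """Transform normal n by the inverse-transpose, then re-normalize."""
    inv = inverse_3x3(upper_left_3x3(modelview))
    # Multiplying by the transpose of inv: out[j] = sum_i inv[i][j] * n[i]
    out = [sum(inv[i][j] * n[i] for i in range(3)) for j in range(3)]
    length = math.sqrt(sum(x * x for x in out))
    return [x / length for x in out]

# Example: translate by (5, 0, 0) and scale z by 3; a +z face normal
# should come out as (0, 0, 1), unaffected by the translation.
mv = [[1, 0, 0, 5],
      [0, 1, 0, 0],
      [0, 0, 3, 0],
      [0, 0, 0, 1]]
print(eye_space_normal(mv, [0.0, 0.0, 1.0]))  # approximately [0.0, 0.0, 1.0]
```

The final re-normalization is what absorbs any scaling left in the matrix, which is why the result stays unit length even for the scaled example above.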