First, to get the light into object space, you transform the light by the inverse of the normal matrix (the normal matrix being the one that would take an object-space normal into light (world) space).

As the normal matrix is the transpose of the inverse of the position matrix, to be fully correct, you have to invert that again. However, if your animation doesn’t use scale or shear, the normal matrix is just the position matrix with elements 12/13/14 (the translation, in OpenGL’s column-major numbering) zeroed out, and since that leaves a pure rotation, its inverse is just its transpose.
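To make that concrete, here’s a minimal sketch (names are mine, not from any particular engine) of bringing a world-space light direction into object space when the model matrix is rotation plus translation only, so the inverse of its upper 3x3 is just its transpose:

```python
def transpose3(m):
    """Transpose a 3x3 matrix given as nested lists (rows)."""
    return [[m[c][r] for c in range(3)] for r in range(3)]

def mul3(m, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(m[r][c] * v[c] for c in range(3)) for r in range(3)]

# Upper 3x3 of the position matrix: a 90-degree rotation about Z,
# no scale or shear, so inverse == transpose.
rot = [[0.0, -1.0, 0.0],
       [1.0,  0.0, 0.0],
       [0.0,  0.0, 1.0]]

light_world = [1.0, 0.0, 0.0]
light_object = mul3(transpose3(rot), light_world)
```

With scale or shear in the matrix, you’d have to do the full inverse instead of the transpose.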

Regarding generating normal maps, it’s really no harder generating those in object space than generating them in tangent space. Of course, it IS harder to generate these than to just take some pre-baked map and slap it on an existing mesh. I’m aware of two ways of generating normal maps:

a) Take a heightfield-style bump map, and run a highpass (derivative) filter on it to generate the normal map. The local differential in the bump map must be applied to the direction of the normal, which means that you have to know how the texture is mapped onto the object. Either this is implicit (for tangent space maps) or you have to examine the geometry to figure out which directions the normal and the S and T coordinates point in.
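Option a) can be sketched like this, assuming the heightfield is a 2D list of floats in [0, 1] and using central differences as the derivative filter (the `bump_scale` parameter is my invention, to control how strong the bumps come out):

```python
import math

def height_to_normals(height, bump_scale=1.0):
    """Turn a heightfield into per-texel normals, with +Z
    pointing out of the surface (tangent-space convention)."""
    h, w = len(height), len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Central differences in S and T, clamped at the borders.
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * 0.5
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * 0.5
            # The normal tilts against the gradient; normalize it.
            n = (-dx * bump_scale, -dy * bump_scale, 1.0)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            normals[y][x] = tuple(c / length for c in n)
    return normals

# A flat heightfield gives straight-up normals.
flat = [[0.5] * 4 for _ in range(4)]
up = height_to_normals(flat)[0][0]
```

For an object-space map you’d then rotate each of these normals by the per-texel tangent frame (normal, S and T directions) recovered from the geometry.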

b) Take a low-poly version of your mesh and a high-poly version of your mesh. Shoot rays out from the low-poly version and find the closest intersection with the high-poly mesh. You can look at the distance traveled and generate a height map that way, and then run option a), or you can just look at the normal of the high-poly object at the point where the ray hits it. ATI has some nice tools and demos for this technique.
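The core of option b) is just a nearest-hit ray cast per sample point. A minimal sketch (my own names; Möller-Trumbore for the ray/triangle test, and using the flat face normal of the hit triangle rather than an interpolated vertex normal):

```python
def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_triangle(orig, dirn, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: return the ray parameter t of the hit, or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(dirn, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None  # ray parallel to triangle
    inv = 1.0 / det
    s = sub(orig, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(dirn, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None

def bake_sample(orig, normal, triangles):
    """Shoot a ray from a low-poly point along its normal; return
    (distance, high-poly face normal) of the nearest hit, or None."""
    best = None
    for v0, v1, v2 in triangles:
        t = ray_triangle(orig, normal, v0, v1, v2)
        if t is not None and (best is None or t < best[0]):
            n = cross(sub(v1, v0), sub(v2, v0))
            length = dot(n, n) ** 0.5
            best = (t, tuple(c / length for c in n))
    return best

# One high-poly triangle a unit above the origin, facing the ray.
tri = ((-1.0, -1.0, 1.0), (1.0, -1.0, 1.0), (0.0, 1.0, 1.0))
hit = bake_sample((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), [tri])
```

The returned distance feeds the height-map route, the returned normal feeds the normal-map route; a real baker would also shoot backwards and pick the closest hit in either direction.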

If by changing the normal map when the mesh changes you mean that, when you change your mesh in your modeler, you have to re-calculate the normal map, then that is correct. That’s usually part of your export/compile/prepare/package tool chain. Tools, in general, are, IMO, the biggest part of creating a new engine these days.