Converting a grayscale heightmap to a normal map is much simpler than proper normal baking from a highly detailed model.

Basically it works roughly like an edge detection filter.

For each source texel, take the difference with the next horizontal texel:

- no difference: the normal points straight up
- +255 difference: maximum tilt to the left
- -255 difference: maximum tilt to the right

So to sum up:

normal.x = sin(arctan(-difference.x/texelscale))

Same thing in the vertical direction:

normal.y = sin(arctan(-difference.y/texelscale))

And of course the z component, to make the normal vector unit length:

normal.z = sqrt(1.0 - normal.x**2 - normal.y**2)
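The three formulas above can be sketched as a per-texel loop. This is a minimal illustration, not a reference implementation: the `height_to_normal` name, the nested-list layout, and the border wrapping (so tiling textures stay seamless) are my own choices.

```python
import math

def height_to_normal(height, texel_scale=1.0):
    """Convert a grayscale heightmap (2D list of 0-255 values) into
    per-texel (x, y, z) normals using the finite-difference formulas
    above. texel_scale controls how steep the resulting slopes are."""
    h = len(height)
    w = len(height[0])
    normals = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Difference with the next horizontal / vertical texel
            # (wrapping at the border, an assumption for tiling maps).
            dx = height[y][(x + 1) % w] - height[y][x]
            dy = height[(y + 1) % h][x] - height[y][x]
            # normal.x = sin(arctan(-difference.x / texelscale)), same for y.
            nx = math.sin(math.atan2(-dx, texel_scale))
            ny = math.sin(math.atan2(-dy, texel_scale))
            # z completes the vector to unit length (clamped against
            # tiny negative values from floating-point rounding).
            nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
            normals[y][x] = (nx, ny, nz)
    return normals
```

On a flat heightmap every difference is zero, so every normal comes out as (0, 0, 1), pointing straight up; a large positive difference pushes normal.x toward -1, a large negative one toward +1.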

Baking a normal map from a high-res 3D mesh is more involved: one has to cast rays from each low-res triangle toward the nearest high-res polygons, retrieve the normal at the hit point, then store it in the normal texture. I admit I don't know much about how to do that in practice.
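As a rough sketch of the inner step only (finding which high-res triangle a ray hits, and fetching its normal), here is a ray/triangle test plus a nearest-hit lookup. Everything here is an assumption of mine: the `fetch_normal` name, the data layout (triangles stored with a precomputed face normal), and the use of the well-known Möller-Trumbore intersection test. Real bakers also handle sample placement, two-sided casting, tangent-space conversion, and acceleration structures, none of which is shown.

```python
def ray_triangle(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    def cross(a, b): return (a[1]*b[2]-a[2]*b[1],
                             a[2]*b[0]-a[0]*b[2],
                             a[0]*b[1]-a[1]*b[0])
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    a = dot(e1, h)
    if abs(a) < eps:          # ray parallel to triangle plane
        return None
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    return t if t > eps else None

def fetch_normal(point, direction, hires_tris):
    """Cast a ray from a sample point on a low-res face and return the
    normal of the NEAREST high-res triangle it hits (or None).
    hires_tris: list of ((v0, v1, v2), normal) pairs -- a hypothetical
    layout chosen just for this sketch."""
    best_t, best_n = None, None
    for (v0, v1, v2), n in hires_tris:
        t = ray_triangle(point, direction, v0, v1, v2)
        if t is not None and (best_t is None or t < best_t):
            best_t, best_n = t, n
    return best_n
```

A baker would call `fetch_normal` once per output texel, with `point` sampled over the low-res surface and `direction` along the low-res interpolated normal, then encode the returned vector into the texture.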