I am a final-year student studying for a degree in Computing Visualisation. For my final-year project I am writing a program in C to model 3D images of faces and then morph between different faces. I am unsure what methods I could use to group vertices, for example to ensure eyes morph with eyes. My data was scanned using a projector and a camera, so noise is also an issue. I would be very grateful if anyone can offer any kind of advice.
By "3D image" I assume you've captured both a 2D texture and a 3D mesh, and have already mapped the 2D texture onto the 3D mesh correctly.
The easiest approach would be to have a human tag certain landmark areas (corners of the eyes, nostrils, corners of the mouth, lip extents, etc.). Your program can then interpolate between these landmarks to bring the rest of the two meshes into correspondence.
If it needs to be automatic, start by looking for easily recognizable contrasts in regions of the face where normal facial proportions suggest a feature should be. In that case it is very important that all images are captured with similar lighting, scale, and positioning.
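As a rough illustration of the contrast-search idea: scan a search window (placed using standard facial proportions, e.g. eyes roughly a third of the way down the face) for the strongest horizontal intensity step in a grayscale image. All the names and the window convention here are my own assumptions, and a real detector would want smoothing first given your scanner noise:

```c
#include <stdlib.h>

/* Find the pixel with the strongest horizontal contrast inside a search
   window (rows row0..row1, columns col0..col1-1) of a grayscale image
   stored row-major, one byte per pixel. Returns the edge magnitude and
   writes the location through best_r / best_c. */
int strongest_contrast(const unsigned char *img, int width,
                       int row0, int row1, int col0, int col1,
                       int *best_r, int *best_c) {
    int best = -1;
    for (int r = row0; r <= row1; ++r) {
        for (int c = col0; c < col1; ++c) {
            /* absolute difference between horizontal neighbours */
            int d = abs((int)img[r * width + c + 1] - (int)img[r * width + c]);
            if (d > best) { best = d; *best_r = r; *best_c = c; }
        }
    }
    return best;
}
```

Running this in a few proportion-derived windows (eye region, nostril region, mouth region) gives candidate landmark positions that can replace the manual tagging step, provided lighting and pose are consistent across scans.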
If you want face-feature recognition and morphing on arbitrary images, then this is an Active Research Area With Room For Improvement.