In the OS X implementation of GLUT, there are all these nice matrix macro operations (vvector.h) I'd like to use. But they operate on a float[4][4] matrix, whereas OpenGL hands back a float[16]:

float temp[16];
glGetFloatv( GL_MODELVIEW_MATRIX, temp );

Is there a more compatible way to use the vvector.h stuff? Or is the caller supposed to convert from float[16] to float[4][4]? Or did I miss the point? Seems like a pain.

Assuming that each float[4] contains a column of the matrix, both types are actually compatible: a float[4][4] occupies the same 16 contiguous floats in memory as a float[16]. You can just use float temp[4][4] in your program and pass &temp[0][0], temp[0], or (float*)temp to glGetFloatv, whichever you like best.

If each float[4] contains a row of the matrix, you have to transpose each matrix received from/passed to OpenGL.
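If you do need that transpose, it's a few lines; a minimal in-place sketch (transpose4x4 is a made-up name, not part of GLUT or vvector.h):

```c
/* Transpose a 4x4 matrix in place, swapping rows and columns. Needed
   when your float[4] rows hold matrix rows but OpenGL expects
   column-major data (or the other way around). */
static void transpose4x4(float m[4][4])
{
    for (int r = 0; r < 4; ++r)
        for (int c = r + 1; c < 4; ++c) {
            float t = m[r][c];
            m[r][c] = m[c][r];
            m[c][r] = t;
        }
}
```

Alternatively, if you can rely on OpenGL 1.3 or the ARB_transpose_matrix extension, glLoadTransposeMatrixf and the GL_TRANSPOSE_MODELVIEW_MATRIX query do the transpose for you.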