Inversion of the model-view matrix

Hey All!

I am attempting to take the inverse of the model-view matrix. The example that I have to work with makes calls to glTranslate() and glRotate(), and I understand this pretty well.

However, the only calls in my program that I think would have had an effect on the model-view matrix are gluPerspective() and gluLookAt(). (I am not using any translations or rotations.)

If I wanted to take the inverse of this model-view matrix would I simply need to apply gluPerspective() and gluLookAt() in the reverse order? Or is my model-view matrix still the identity?


Hmm… unless you have other reasons for doing it, the gluPerspective call is usually applied to the projection matrix (GL_PROJECTION), not the modelview matrix.

But anyway, the inverse of a matrix is not obtained by simply reversing the order of the matrix multiplications that got you the matrix. Think of three 1x1 matrices: [1] [1] [3], which yield the composite matrix [3]. The inverse of this matrix is [1/3], which isn’t equal to [3][1][1]. With a little effort you can probably find some code to invert a matrix for you.

Ok, that makes more sense,

I forgot that there was a perspective matrix. I’ll look online for some code to invert my matrix.



Just in case you weren’t aware:

the modelview matrix is responsible for transforming your vertices from object space to eye space (placing them where you’d like them in your scene, as seen by the camera — OpenGL folds the model and view transforms into this single matrix).

The perspective (projection) matrix is responsible for transforming those vertices to screen space (in other words, projecting your vertices onto the 2D screen).

So, a vertex is first multiplied by the modelview matrix to position it in the scene; this new vertex is then multiplied by the perspective matrix to determine its final resting place on the screen.


There’s a simple way to invert matrices without crunching digits. There’s a nice law in linear algebra that says that the inverse of a product is equal to the reverse product of the inverses. That is, if

M = A B C D

then

M^-1 = (ABCD)^-1 = D^-1 C^-1 B^-1 A^-1

If your matrices are simple translations, rotations and scaling, then you can invert each of these matrices independently of the others.

Translation by a vector t: T^-1 translates by -t
Uniform scaling by a factor s: S^-1 scales by 1/s
Rotation: R^-1 = R^T (the transpose)

I think that’s where you got the notion of a “reverse” order. I like to think of an inverse as a reversal, an “undo” matrix.


So no translations, rotations, or scaling are applied to my modelview matrix, which means I don’t need to worry about inverting any of those (at the moment!).

The only call that I make is to gluLookAt() so would I just need to find the inverse of the gluLookAt() matrix? Or would my modelview matrix still be the identity since I have not done any translations, rotations or scaling?

I’m just having trouble getting my head around this, as I’m not really sure what the gluLookAt() function actually does to the modelview matrix. I’m assuming that I would need to invert whatever it does so that my modelview matrix is transformed back to the origin.

Let me know if I am on the right track!

First, have a good look at the section on viewing in the redbook. It’s important to have a firm grasp of the camera.

Second, any questions you may have about how certain utility functions work can be answered by simply looking at the sample implementation that SGI provides (MESA is another alternative).

In a nutshell, it doesn’t make much difference if you call LookAt or Translatef/Rotatef; it’s simply a matrix in the end. You can invert this matrix using the reverse order law, or with a brute force inversion – it’s up to you. There are lots of ways to think about transformations. The better you understand them, the more fun you’ll have :slight_smile: I’ve been playing with them a long time, and I’m still trying to find new ways to think about them.

Here’s a code snippet that demonstrates the idea. This assumes translation and rotation only (with LookAt, that’s a safe assumption).

// Inverts a rigid transform (rotation + translation only), column-major.
struct Vector { float x, y, z; Vector(float a, float b, float c) : x(a), y(b), z(c) {} };
float dot(const Vector &a, const Vector &b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

void invertMat(const float m[16], float i[16]) {
    Vector t(m[12], m[13], m[14]);
    // o = R^T * t, where R is the upper-left 3x3 of m
    Vector o(dot(Vector(m[0], m[1], m[2]),  t),
             dot(Vector(m[4], m[5], m[6]),  t),
             dot(Vector(m[8], m[9], m[10]), t));

    // transpose the rotation part, negate the rotated translation
    i[0] =m[0]; i[1] =m[4]; i[2] =m[8]; i[3] =0;
    i[4] =m[1]; i[5] =m[5]; i[6] =m[9]; i[7] =0;
    i[8] =m[2]; i[9] =m[6]; i[10]=m[10];i[11]=0;
    i[12]=-o.x; i[13]=-o.y; i[14]=-o.z; i[15]=1;
}