Accurate Matrix Inverse

If you have some accurate (and reliable) non-affine matrix inverse code, could you post it here?
I’m currently using the Gaussian reduction method, which is mostly OK with double-precision floats, but it still tends to introduce some significant inaccuracies.

I assume you’ve tried this one:

Non-affine? Does that mean non-orthogonal?

You don’t say what kind of matrix you have, and that matters:

  • If the matrix is not very big, you can write out the exact inverse using det…
    But if you don’t need the inverse itself, only a solution of A.X = B, and your matrix is symmetric (and better still, positive semi-definite), then use the fast gradient algorithm (or sh… my English is too bad, see
    Otherwise, the reduction to use (Gauss, LR, Schmidt…) depends on the form of the matrix and what you want to do with it. Hope this helps a little.
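The “fast gradient algorithm” mentioned above is presumably the conjugate gradient method for symmetric positive definite systems. A minimal sketch in C, assuming a small dense matrix (the function names and the tolerance are mine, not from the post):

```c
#include <math.h>

#define N 3

/* Multiply y = A*x for an N x N matrix stored row-major. */
static void matvec(const double A[N][N], const double x[N], double y[N])
{
    for (int i = 0; i < N; ++i) {
        y[i] = 0.0;
        for (int j = 0; j < N; ++j)
            y[i] += A[i][j] * x[j];
    }
}

static double dot(const double a[N], const double b[N])
{
    double s = 0.0;
    for (int i = 0; i < N; ++i)
        s += a[i] * b[i];
    return s;
}

/* Conjugate gradient: solves A*x = b for symmetric positive definite A.
   In exact arithmetic it converges in at most N iterations; no inverse
   is ever formed, which is the point made above. */
void cg_solve(const double A[N][N], const double b[N], double x[N])
{
    double r[N], p[N], Ap[N];
    for (int i = 0; i < N; ++i) {
        x[i] = 0.0;     /* start from the zero vector */
        r[i] = b[i];    /* residual r = b - A*x = b   */
        p[i] = b[i];    /* first search direction     */
    }
    double rr = dot(r, r);
    for (int k = 0; k < N && rr > 1e-30; ++k) {
        matvec(A, p, Ap);
        double alpha = rr / dot(p, Ap);          /* step length        */
        for (int i = 0; i < N; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;               /* direction update   */
        for (int i = 0; i < N; ++i)
            p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
}
```

This only applies when A is symmetric positive definite, as the poster says; for a general matrix you fall back to one of the reductions.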

An affine matrix only translates, scales and rotates. A non-affine matrix can also skew.

You mean it preserves angles. I thought scale was not included, since an affine transform is also supposed to preserve dimensions.

Bad memory!

I have been using the inverse function present in the GLU source code. Does anyone know what method it uses?


My understanding of the meaning of affine comes from so it’s quite possibly wrong.

Hmm… (search, search, search) the GLU source appears to use the Gaussian reduction method.

Having now tried almost every bit of source code I’ve found to perform the inversion, I have concluded that they are all approximately equal when it comes to accuracy, and the Gaussian method is fastest.

It seems that the inaccuracy is introduced somewhere else in my code. Back to debugging…

For small nonsingular matrices, the fastest and most accurate method is the one with the fewest floating-point operations. For a 4x4, a direct solution, via Cramer’s rule or some other method, is the fastest. If the matrix is symmetric, only the diagonal and either the above- or below-diagonal terms need to be calculated. Nearly singular matrices require special and computationally expensive solvers such as the Householder algorithm. Generally, using double precision is the first resort because it is easy, computationally cheap, and may be required anyway. If you have a nearly singular rotation or translation matrix, most likely you have a mistake, or you can rescale the model to remove the problem. If you really, really need a special solver, send me an e-mail.
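To illustrate the “direct solution with the fewest operations” point: here is a 3x3 inverse via cofactors and the determinant (Cramer-style) in C. A sketch only; the 4x4 case works the same way, just with more sub-determinants:

```c
#include <math.h>

/* Direct 3x3 inverse via the adjugate and determinant.
   Matrices are row-major, m[3*i + j]. Returns 0 if the matrix is
   numerically singular, 1 on success. The 1e-12 threshold is an
   arbitrary choice for this sketch. */
int invert3x3(const double m[9], double inv[9])
{
    /* cofactors of the first row; they also give det by expansion */
    double c00 =  m[4]*m[8] - m[5]*m[7];
    double c01 = -(m[3]*m[8] - m[5]*m[6]);
    double c02 =  m[3]*m[7] - m[4]*m[6];
    double det = m[0]*c00 + m[1]*c01 + m[2]*c02;
    if (fabs(det) < 1e-12)
        return 0;
    double id = 1.0 / det;

    /* inverse = transpose of the cofactor matrix, divided by det */
    inv[0] =  c00 * id;
    inv[3] =  c01 * id;
    inv[6] =  c02 * id;
    inv[1] = -(m[1]*m[8] - m[2]*m[7]) * id;
    inv[4] =  (m[0]*m[8] - m[2]*m[6]) * id;
    inv[7] = -(m[0]*m[7] - m[1]*m[6]) * id;
    inv[2] =  (m[1]*m[5] - m[2]*m[4]) * id;
    inv[5] = -(m[0]*m[5] - m[2]*m[3]) * id;
    inv[8] =  (m[0]*m[4] - m[1]*m[3]) * id;
    return 1;
}
```

Every entry is a fixed, branch-free expression, which is why this beats an elimination loop for fixed small sizes.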

Here’s a more trustworthy definition:

Is this the Gaussian method?

inline void s3dMatrixInvert16f(S3Dfloat* m)
{
	S3Dmat16f temp;

	/* ... the actual inversion arithmetic was lost from the original post ... */

	memcpy(m, temp, sizeof(S3Dfloat)*16);
}

I believe I got this method from NitroGL’s bumpmapping code.

No, WhatEver, that is not the Gaussian algorithm.

Don’t Disturb, are you using row pivoting? If you are inverting 4x4 matrices with Gaussian elimination with row pivoting (the most common method) then you should not be getting bad roundoff errors even with single precision floats. Unless your matrices are nearly singular (or poorly scaled or very large), there is probably a bug in your code.
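For reference, Gaussian (Gauss-Jordan) elimination with row pivoting works by reducing the augmented matrix [A | I] until the left half is the identity, at which point the right half holds the inverse. A sketch in C, assuming row-major 4x4 matrices; this is illustrative, not the GLU source:

```c
#include <math.h>

/* Gauss-Jordan elimination with row pivoting on [A | I].
   On success the inverse of m is written to inv (both row-major 4x4).
   Returns 0 if a pivot is numerically zero (singular), 1 otherwise.
   The 1e-12 singularity threshold is my choice for this sketch. */
int gauss_invert4x4(const double m[16], double inv[16])
{
    double a[4][8];

    /* build the augmented matrix [A | I] */
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            a[i][j]     = m[4*i + j];
            a[i][j + 4] = (i == j) ? 1.0 : 0.0;
        }

    for (int col = 0; col < 4; ++col) {
        /* row pivoting: pick the row with the largest |entry| in this
           column, which is what keeps the roundoff under control */
        int piv = col;
        for (int r = col + 1; r < 4; ++r)
            if (fabs(a[r][col]) > fabs(a[piv][col]))
                piv = r;
        if (fabs(a[piv][col]) < 1e-12)
            return 0;                      /* singular */
        if (piv != col)
            for (int j = 0; j < 8; ++j) {
                double t = a[col][j]; a[col][j] = a[piv][j]; a[piv][j] = t;
            }

        /* normalize the pivot row */
        double p = a[col][col];
        for (int j = 0; j < 8; ++j)
            a[col][j] /= p;

        /* eliminate this column from every other row */
        for (int r = 0; r < 4; ++r) {
            if (r == col) continue;
            double f = a[r][col];
            for (int j = 0; j < 8; ++j)
                a[r][j] -= f * a[col][j];
        }
    }

    /* the right half of the augmented matrix is now the inverse */
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            inv[4*i + j] = a[i][j + 4];
    return 1;
}
```

Without the pivot search, a small (or zero) diagonal entry would get divided through and amplify roundoff, which is the failure mode being discussed.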

NitroGL’s 4x4 Matrix Inverse may have been arrived at using Gaussian elimination, but it is not, as pointed out, the Gaussian elimination algorithm. It was most likely arrived at using Cramer’s rule, exploiting the fact that the homogeneous-coordinate transformation matrix has a row and/or column that is mostly zeros. This method is the fastest (faster than Gaussian elimination), but one of the problems with using someone else’s code is that you cannot be sure what assumptions were made to begin with unless it is very well documented.
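One way the mostly-zero row can be exploited, as described: if the bottom row is (0, 0, 0, 1), invert the upper-left 3x3 block by cofactors and transform the translation column. A hedged sketch assuming row-major storage; this is not NitroGL’s actual code, and the function name is mine:

```c
#include <math.h>

/* Inverse of a 4x4 row-major matrix m[4*i + j] whose bottom row is
   (0, 0, 0, 1), i.e. a homogeneous transform with no projective part.
   Inverts the upper-left 3x3 block R directly, then maps the
   translation column t to -R^-1 * t.
   Returns 0 if R is numerically singular, 1 on success. */
int invert_homogeneous4x4(const double m[16], double inv[16])
{
    /* cofactors of the first row of R */
    double c00 =  m[5]*m[10] - m[6]*m[9];
    double c01 = -(m[4]*m[10] - m[6]*m[8]);
    double c02 =  m[4]*m[9]  - m[5]*m[8];
    double det = m[0]*c00 + m[1]*c01 + m[2]*c02;
    if (fabs(det) < 1e-12)
        return 0;
    double id = 1.0 / det;

    /* R^-1 = adj(R) / det */
    inv[0]  =  c00 * id;
    inv[4]  =  c01 * id;
    inv[8]  =  c02 * id;
    inv[1]  = -(m[1]*m[10] - m[2]*m[9]) * id;
    inv[5]  =  (m[0]*m[10] - m[2]*m[8]) * id;
    inv[9]  = -(m[0]*m[9]  - m[1]*m[8]) * id;
    inv[2]  =  (m[1]*m[6] - m[2]*m[5]) * id;
    inv[6]  = -(m[0]*m[6] - m[2]*m[4]) * id;
    inv[10] =  (m[0]*m[5] - m[1]*m[4]) * id;

    /* t' = -R^-1 * t, where t is the fourth column of m */
    inv[3]  = -(inv[0]*m[3] + inv[1]*m[7] + inv[2]*m[11]);
    inv[7]  = -(inv[4]*m[3] + inv[5]*m[7] + inv[6]*m[11]);
    inv[11] = -(inv[8]*m[3] + inv[9]*m[7] + inv[10]*m[11]);

    /* bottom row stays (0, 0, 0, 1) */
    inv[12] = inv[13] = inv[14] = 0.0;
    inv[15] = 1.0;
    return 1;
}
```

This is exactly the kind of undocumented assumption the post warns about: feed this routine a projection matrix and it silently produces garbage.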

I think that what the OP and many of you want is an inverse of a 4x4 homogeneous-coordinate transformation matrix, regardless of whether the matrix is affine or not. However, it may help to think of an “affine” matrix as just another transformation matrix with the special property that it preserves lines and parallelism.

Double precision should be used before implementing a partial pivoting strategy such as row pivoting, because in some situations partial pivoting can also produce inaccurate solutions, and because certain matrices, such as diagonally dominant and positive definite matrices, don’t require pivoting at all. Because the FPUs in most computers do arithmetic internally in double precision, using double precision is almost free, apart from a few extra memory transfers and a little extra storage. The extra storage for a 4x4 matrix inverse is negligible.
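The precision point can be seen with a tiny cancellation example: 4097² − 4096² = 8193 exactly, but 4097² needs 25 mantissa bits, so single precision rounds the product and the subtraction exposes the lost bit, while double precision gets it exactly. A sketch (the `volatile` stores force single-precision rounding even on FPUs that compute in extended precision):

```c
/* 2x2 determinant a*d - b*c in single and double precision.
   With a = d = 4097 and b = c = 4096 the true value is 8193, but the
   product 4097*4097 = 16785409 is not representable in a 24-bit float
   mantissa, so the float version loses the low-order bit. */
float det2x2f(float a, float b, float c, float d)
{
    volatile float ad = a * d;   /* force rounding to single precision */
    volatile float bc = b * c;
    return ad - bc;
}

double det2x2d(double a, double b, double c, double d)
{
    return a * d - b * c;        /* both products exact in double here */
}
```

This is the sense in which double precision is the cheap first resort: the same two multiplies and one subtract, with no lost bit.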