i have this problem:
after several pushes/pops/translates/rotates i get the modelview matrix using glGetFloatv() and store it in a 4x4 GLfloat matrix. how can i invert this matrix? (i know opengl is not a math api, and i also know i can invert it using maths, i.e. the gauss method. but is there any other (faster) way, because i want this calculation to take place several times during a rendering cycle?)
to better understand the problem:
i need to transform the light using the inverse modelview matrix (that's what the trouble is all about, the inversion). then i use glLoadIdentity() and after that i load the modelview matrix (not inverted) and draw an object. since i have several transform commands, i started working on a weird (and maybe stupid) solution: i kept track of the transform commands (for example, after a glPopMatrix i inserted a number in a matrix (i call it the command matrix), after a glTranslate i inserted another number in the same matrix - in order to "describe" the transformations). then, when i wanted the inverse matrix, i executed the commands "described" in the command matrix one by one (negated and in the reverse order). but the problem was that i needed to draw many objects and somewhere i lost track with all the pushes/pops.
so, i decided to abandon this solution and try to explicitly inverse the modelview matrix myself.
somewhere in the forum i read that if you discard the translations from the modelview matrix, you can easily invert it. i wonder how this can happen since after discarding the translations (keeping only the rotations) the matrix is still not easy to invert.
any help would be greatly appreciated.
thanx in advance for your time
If you discard any transforms made on the object, don't you end up with a basic identity matrix? In which case it should be its own inverse. Also, there is another formula other than Gaussian elimination. Say you are trying to find the inverse of matrix A. Then the formula is (1/det(A)) * (the transpose of the cofactor matrix of A).
Look at the site above for a better explanation. The Inverse by Determinants is what I'm talking about. It's a straight, clean-cut formula. The Gaussian Elimination is shown after the cofactor method.
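For what it's worth, here is a minimal sketch of that cofactor/adjugate formula for the 3x3 case (row-major storage and the name `invert3x3` are just illustrative assumptions, not from the site above):

```c
#include <math.h>

/* inverse(A) = (1/det(A)) * adj(A), where adj(A) is the transpose of
   the cofactor matrix. 3x3 row-major. Returns 0 if det is ~0. */
int invert3x3(const float a[9], float out[9])
{
    float det = a[0] * (a[4] * a[8] - a[5] * a[7])
              - a[1] * (a[3] * a[8] - a[5] * a[6])
              + a[2] * (a[3] * a[7] - a[4] * a[6]);
    if (fabsf(det) < 1e-8f) return 0;   /* singular */
    float id = 1.0f / det;
    /* each entry is a signed 2x2 minor, transposed (the adjugate) */
    out[0] =  (a[4] * a[8] - a[5] * a[7]) * id;
    out[1] = -(a[1] * a[8] - a[2] * a[7]) * id;
    out[2] =  (a[1] * a[5] - a[2] * a[4]) * id;
    out[3] = -(a[3] * a[8] - a[5] * a[6]) * id;
    out[4] =  (a[0] * a[8] - a[2] * a[6]) * id;
    out[5] = -(a[0] * a[5] - a[2] * a[3]) * id;
    out[6] =  (a[3] * a[7] - a[4] * a[6]) * id;
    out[7] = -(a[0] * a[7] - a[1] * a[6]) * id;
    out[8] =  (a[0] * a[4] - a[1] * a[3]) * id;
    return 1;
}
```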
halcyon, i must insist that the gauss elimination is quicker than the calculation involving the adjoint of a matrix. in other words, we have a 4x4 square matrix M. we calculate det(M) by calculating 4 3x3 dets.
for every 3x3 det we calculate 3 2x2 dets. after that we need the adj(M) matrix which is 4x4. for every element of this matrix we need to calculate a 3x3 det and multiply it by (-1)^(i+j), where i and j describe the element's position in the matrix. please, tell me if i am mistaken. with some math tricks (and if the matrix is "kind" enough to "accept" these tricks) we can avoid some of the det calculations. but the problem still exists since these calculations take place several times during a rendering cycle.
so, (again, if i am not mistaken) i think the gauss method is faster. but since this is not a math forum, i would like to ask if there is another way (for instance, if you can get this matrix from opengl, since opengl uses this inverse matrix for lighting calculations etc) or a quicker way due to the nature of the modelview matrix.
also, when i talked about discarding information from the modelview matrix, i meant discarding only the translation (not all of the transformations). the rotations are still there. the point is to invert the matrix which contains only the rotations; after that you can apply the translations negated and you end up with the original matrix inverted…
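That trick can be sketched in C, assuming the modelview matrix is rigid (rotations + translations only, no scaling), stored column-major the way glGetFloatv returns it; the name `invert_rigid` is just illustrative:

```c
#include <math.h>

/* Rigid-body inverse: the upper-left 3x3 rotation block is orthogonal,
   so its inverse is its transpose; the inverse translation is then
   -R^T * t, where t = (m[12], m[13], m[14]) in column-major layout. */
void invert_rigid(const float m[16], float out[16])
{
    /* transpose the upper-left 3x3 rotation block */
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            out[c * 4 + r] = m[r * 4 + c];

    /* new translation = -(transposed rotation) * old translation */
    for (int r = 0; r < 3; ++r)
        out[12 + r] = -(out[0 * 4 + r] * m[12] +
                        out[1 * 4 + r] * m[13] +
                        out[2 * 4 + r] * m[14]);

    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}
```

This costs only a handful of multiplies and adds per call, which is why it is so much cheaper than a general 4x4 inverse; it just silently gives wrong results if the matrix contains scaling.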
unfortunately, for me this is a dead end…
thanx again for reading this
[This message has been edited by kawasakis900 (edited 02-05-2003).]
Row reduction is typically what I use for calculating inverse of a matrix. And, if you are only using it for a 3x3 or a 4x4, you should be able to optimize the code as opposed to a function that can take any nxn matrix.
From what I have seen, there is not an OpenGL function to perform inverting matrices. Better to write a row reduction function and be done with it.
I actually wrote one after reading your post and it took less than half an hour including debugging. Time for 1 calculation: for a 3x3 matrix, an average of 4.2E-6 seconds (15 counts at 3579545 counts/second); for a 4x4 matrix, an average of 5.9E-6 seconds (21 counts at 3579545 counts/second). Not sure if this is too large for your purposes, but unless you are performing hundreds or thousands of these calculations it should be acceptable.
i don't know what row reduction is, but if it is not cpu-hungry it is very suitable for me. could you please post some code (pseudo or whatever) or a URL to look for it?
Row reduction is the same thing as gaussian elimination.
Actually, it looks as though Gauss elimination is similar to, but not exactly the same as, row reduction for determining a matrix inverse. The only real difference is that at the start you place an identity matrix to the right of the matrix to be inverted, instead of the scalar portion of a system of equations. But it works the same.
I do not like to just give code away, because it doesn’t allow a person the opportunity to implement it themselves and learn from it. And, someone may find a more efficient method doing it themselves, and learn more. At the same time, it is probably easier to write more efficient code looking at someone else’s.
[ul]
[li]Initialize the inverse matrix as identity at the start of the function.[/li]
[li]Use two outer loops, say i and j. Start at the upper left; work down the columns row by row, then move to the next column row by row. You only read the nxn matrix to obtain the divisor.[/li]
[li]If i == j, divide the entire row (matrix and inverse) by (divisor =) the value at matrix[i][j]. Verify that matrix[i][j] != 0, else return FALSE or set inverse = NULL.[/li]
[li]Else, divisor = the value at matrix[i][j] / matrix[j][j]. Verify that matrix[j][j] != 0.0, else return FALSE or set inverse = NULL.[/li]
[li]Use k as an inner loop for the row reduction: matrix[i][k] -= matrix[j][k] * divisor and inverse[i][k] -= inverse[j][k] * divisor. (Or divisor could be the reciprocal and you divide by it, which would keep the variable's name self-explanatory.) End inner loop k.[/li]
[li]If any of the diagonal values of matrix checked within the loops as described above is zero, the matrix is singular (noninvertible).[/li]
[li]The matrix and inverse variables should be references to external arrays or pointers to nxn contiguous memory. The matrix passed to this function will be identity afterwards, and inverse will be the inverse of matrix. If you need the original matrix you should save it prior to calling the function. Check the function's return value, or whether inverse == NULL (depending on how you set it up), before using the inverse matrix after the call.[/li]
[li]Should only be 25 to 30 lines of code.[/li]
[/ul]
Hope it helps. If you have trouble understanding my long-winded pseudo explanation, let me know.
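A minimal C sketch of the pseudocode above for the 4x4 case (row-major storage; no row pivoting beyond a zero check, so a matrix that merely needs row swaps will be reported singular; `invert4x4` is an illustrative name):

```c
#include <math.h>

/* Gauss-Jordan elimination: reduce m to identity while applying the
   same row operations to inv, which starts as identity. Destroys m
   (it ends up as identity) and writes the inverse into inv.
   Returns 1 on success, 0 if a zero pivot is hit. */
int invert4x4(float m[16], float inv[16])
{
    for (int i = 0; i < 16; ++i)
        inv[i] = (i % 5 == 0) ? 1.0f : 0.0f;   /* identity */

    for (int j = 0; j < 4; ++j) {
        if (fabsf(m[j * 4 + j]) < 1e-8f) return 0;  /* zero pivot */
        float d = m[j * 4 + j];
        for (int k = 0; k < 4; ++k) {               /* scale pivot row to 1 */
            m[j * 4 + k]   /= d;
            inv[j * 4 + k] /= d;
        }
        for (int i = 0; i < 4; ++i) {               /* clear column j elsewhere */
            if (i == j) continue;
            float f = m[i * 4 + j];
            for (int k = 0; k < 4; ++k) {
                m[i * 4 + k]   -= m[j * 4 + k] * f;
                inv[i * 4 + k] -= inv[j * 4 + k] * f;
            }
        }
    }
    return 1;
}
```

As described in the post above, save the input matrix first if you still need it, and check the return value before using `inv`.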
Also, I am running AMD Athlon XP1800+, ASUS A7V333 mobo, 512MB PC2100 DDR, Win2k. So, depending on your hardware, and your system, your timing results may differ quite a bit faster or slower.
Modelview matrix, if used correctly, is an affine matrix. You can get the inverse matrix simply by transposing.
I did not know that. I've heard the term but was not certain of what it meant. I assumed that with all of the transformations, in addition to the projection and viewport transformations, it could never be that simple.
Just out of curiosity, could you explain affine matrices or point me to an explanation.
Actually, I’m not so sure what the definition of affine matrix is.
Affine transformations are translate, rotate, scale and shear. Those are the ones representable with a 4x3 matrix, so I guess an affine matrix is one with 0 0 0 1 on the bottom row. This is not true for a projection, but that isn't usually stored in a modelview matrix anyway.
I found this link, it contains the code to invert an affine matrix. http://www.cs.unc.edu/~gotz/code/affinverse.html
So, basically an affine matrix has the property of:
transpose(A) = inverse(A)
except for the translation, and is somewhat symmetric (not really the word I was looking for, perhaps weighted +/-) about the diagonal.
Sounds good. Learn something new everyday.
Wrong. An orthogonal matrix has transpose = inverse. An affine transform is not, in general, orthogonal. Think about the case of pure scaling.
If the matrix represents only rotation it will be orthogonal.
I hope it can help you.
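A quick numerical check of that claim: for a pure rotation R, multiplying R by its own transpose gives the identity, which is exactly the statement transpose(R) = inverse(R). A small sketch (3x3 row-major; `mul_by_transpose` is just an illustrative name):

```c
#include <math.h>

/* Compute out = R * R^T for a 3x3 row-major matrix R.
   If R is a pure rotation, out should come back as the identity. */
void mul_by_transpose(const float r[9], float out[9])
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[i * 3 + j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                out[i * 3 + j] += r[i * 3 + k] * r[j * 3 + k];
        }
}
```

Running this on a scale matrix instead of a rotation gives the squared scale factors on the diagonal rather than the identity, which is gumby's point about affine transforms not being orthogonal in general.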
Originally posted by gumby:
Wrong. An orthogonal matrix has transpose = inverse. An affine transform is not, in general orthogonal.
Oops. Guess I shouldn't be going around shining with my math skills when I don't have any. Well, point made, it's still a lot easier than Gaussian elimination and stuff.
i think a modelview matrix is orthogonal if there are no translations and no scalings/skews (that means only rotations). so when you want to invert it you just transpose it (i know guys, my english sucks).
but, to get the problem from the start, i tried something else. instead of inverting the modelview matrix myself, i took a dummy vertex at position (0, 0, 0) and "applied" the current modelview matrix to it. then, if i wanted to transform something with the inverse of the modelview matrix, i added (-x, -y, -z) of the dummy vertex i got to its coordinates. well, i don't know if this is the right solution to the problem.
also, i want to thank everyone for giving an answer. the links were very good.
if you have only rotation transformations you might know that a rotation matrix Rx, Ry, Rz is orthogonal and thus can be inverted by transposing it