I’m trying to track a point that sits at the end of several transformations, e.g. a human hand at the end of an arm.

By looking at the modelview matrix after the transformations to that point, I get coordinates that are somehow relative to the camera. How can I modify these coordinates so they are relative to (0,0,0) in the world? Then I can start my next frame at (0,0,0), move straight to where the hand was in the previous frame, draw a ball, and then draw the rest of the scene.

I want to translate modelview matrix coordinates into world coordinates, can anyone help?

I’ve tried taking a vector from the camera to the world (0,0,0) (before any transformations) and then taking a vector from the camera to the tracked point, both out of the modelview matrix, and then subtracting them to get a vector from world (0,0,0) to the tracked point. It works when the camera isn’t tilted, but as soon as the camera animations kick in it all goes pear-shaped.

I also tried to throw a heap of trig. at the problem and calculate the tilt of the camera in each dimension, then use it to adjust the modelview matrix coordinates into world coordinates, but that had similar results. It worked until I started tilting the tracked object, then the tracking coordinates blew up into huge values.

Is the solution a combination of both efforts, or is it something else all together?

To convert camera coordinates into world coordinates you must multiply the coordinates by the inverse of the camera rotation matrix (the rotation part of the modelview matrix, in your case). You can use the transpose instead of the inverse, since they are identical in the case of rotation matrices. Hope that helps.

I’ve tried multiplying by the transpose of the rotational part of the MV matrix.

Let me try and explain that more fully…

I take MVMatrix[12] -> [14] (X, Y, Z) and multiply them by the transpose of the rotation part, which gives code that looks like this:

X = MVMatrix[0]*MVMatrix[12] + MVMatrix[4]*MVMatrix[13] + MVMatrix[8]*MVMatrix[14] + MVMatrix[12]*MVMatrix[15];
Y = MVMatrix[1]*MVMatrix[12] + MVMatrix[5]*MVMatrix[13] + MVMatrix[9]*MVMatrix[14] + MVMatrix[13]*MVMatrix[15];
Z = MVMatrix[2]*MVMatrix[12] + MVMatrix[6]*MVMatrix[13] + MVMatrix[10]*MVMatrix[14] + MVMatrix[14]*MVMatrix[15];

Right? Well it doesn’t work…

The closest I can get to tracking the point is to take the 12 -> 14 part of the MVMatrix when I know that I’m at (0,0,0), and then again at the point I want to track, then use vector subtraction to get the vector between them. I really thought that approach would work, but it doesn’t either. It works when the camera is not tilted, but as soon as I start moving the camera around it loses accuracy. Thing is, it’s still pretty close, and it’s the closest I’ve come to actually determining the world X, Y, Z from the MVMatrix.

I’ve also tried taking a vector from the camera to its target (I’m using gluPerspective), calculating the angles from it (using the piece of quake2 source recently featured in this forum), and then rotating the MVMatrix I have by those angles. I thought if I then took MVMatrix[12] -> [14] they would be world coordinates, perhaps relative to the camera, perhaps to (0,0,0). This approach failed as well.

One other idea I’ve had is to grab unmatrix.c from Graphics Gems and try using it, but I have no clue how it works, so I’m reluctant to put it in my code.

Any ideas? I’m in desperate need of help on this one.

Hmmm… the 12-14 matrix values from the modelview matrix give the XYZ, right? Except that if you use a projection matrix for your camera, it’s no longer accurate, right?

Why not multiply your projection matrix with your model matrix, reset your projection matrix, place your object and take your XYZs, then pop back to your original state?

I think everybody missed the point of the question.

From what I can tell, he’s asking how to transform a point in a hierarchical model into world space.

Basically, what you have to do is build the matrix stack up to the part of the model that has the point you want to transform. This stack must not have the camera matrix in it at the moment (don’t call gluLookAt yet). After building the stack, get the current modelview matrix and use a simple matrix-vector transform to get the world-space position of that point.

An example: if you want to find the world-space position of (0, 0, 0) in the space of the hand (which should be the center of the hand), then build the matrix stack up to that hand, get the current modelview matrix, and transform (0, 0, 0) by it.

That’s right Korval, that’s what I’m trying to do. I should have explained more clearly I guess.

What I want is to track the motion of the hand and leave a slightly transparent trail wherever it goes. It almost works at the moment, but it’s obvious something is not right.

So what you’re saying is that I should load the ModelView Matrix Identity, build up to the hand, grab the ModelView Matrix and those values should be world coordinates?

Then I would draw the rest of the scene. When I’m done call gluLookAt? Then swap the buffers to display that frame?

I thought gluLookAt had to be done first?

Thanks to everyone for helping out, I’ll give this a shot and let you all know how it goes…

I don’t think you can apply the gluLookAt after you’ve built up the scene …

My only choice from here is to build the scene up to the point I want to track, on top of identity modelview and projection matrices, get the coordinates, and then build the scene all over again to display it. That’s far too much computation, but it’s my last chance unless someone has a better idea?