I don’t believe there is a standard “multiplication order” for the modelview matrix when it is broken into viewing and modeling matrices (V and M), but if there is, does it go like:

modelview matrix = VM

or

modelview matrix = MV


Don’t think of it like that, because OpenGL doesn’t separate the viewing and modeling matrices.

If you do:

glTranslatef(…) // will generate matrix M1

glRotatef(…) // will generate matrix M2

draw(vertex V)

your vertex V will be multiplied by M2 and then by M1. The result is: M1*(M2*V)
If you want to separate the view and model transformations:
// set up the camera: generates matrix V
// position your model: generates matrix M
// draw vertex X
res = V*(M*X)

If you are using shaders, then you are far better off supplying your own set of matrices rather than relying upon the builtin GLSL matrices for modelview and modelviewprojection.

The reason is that to perform your own lighting calculations you may have to ‘undo’ the effect of the camera - i.e. multiply by the inverse modelview matrix. To avoid this kind of nonsense, it’s better not to have multiplies by the camera in the first place - something you can’t do with the builtin modelview matrix.

Anyway, back to your question: the order is

mat4 modelview = cameraviewmatrix * modelmatrix;

mat4 modelviewproj = projectionmatrix * modelview;

gl_Position = modelviewproj * gl_Vertex;

So in fixed function terms, the ModelView matrix is calculated as:

MV = V * M

and ModelViewProjection is:

MVP = P * V * M

Therefore every vertex sent to the Fixed function or a shader is transformed by:

gl_Position = P * V * M * Vertex;
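Putting the separate-matrices approach above into a concrete shape, a minimal GLSL vertex shader might look like the following sketch. The uniform names here are my own assumptions, not anything built in; you choose them yourself and upload each matrix with glUniformMatrix4fv:

```glsl
// Hypothetical uniform names -- nothing builtin, you supply all three.
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;

attribute vec3 position;

void main()
{
    // World, eye and clip space stay explicit, so no inverse modelview
    // is ever needed to get back to world space for lighting.
    vec4 worldPos = modelMatrix * vec4(position, 1.0);
    vec4 eyePos   = viewMatrix * worldPos;
    gl_Position   = projectionMatrix * eyePos;   // P * V * M * vertex
}
```

Because worldPos is computed as an intermediate, lighting calculations that need world-space positions can use it directly, which is exactly the advantage over the concatenated builtin modelview.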

Thanks for both of your replies.

If I may ask, is there documentation that shows what you guys have told me; that the modelview transformation is in fact calculated as V*M? I can’t find that in the red book or the OpenGL Superbible, though maybe I’m not looking at the right spots.

More generally, I’d like to learn as much as possible about the transformations because in the code I’m maintaining (uncommented, unfortunately) the authors do a lot of this kind of stuff (example below)

//////////////////////////////////

glLoadIdentity();

glMultMatrixf(model->node->matrix);

glTranslatef(-(model->node->pivotPoint[0]),-(model->node->pivotPoint[1]),-(model->node->pivotPoint[2]));

Matrix newMatrix;

InvertMatrix(ModelViewMatrix, newMatrix);

glMultMatrixf(newMatrix);

glGetFloatv(GL_MODELVIEW_MATRIX, model->node->mesh->matrix);

///////////////////////////////////////////

In other words, lots of glMultMatrixfs and inverting of matrices, some translation mixed in, all that. I sort of understand what the authors are doing but I don’t really have a firm handle on it, and am hunting far and wide for more explanations.

The first lines:

glMultMatrixf(model->node->matrix);

glTranslatef(-(model->node->pivotPoint[0]),-(model->node->pivotPoint[1]),-(model->node->pivotPoint[2]));

The model is positioned around a pivot point.

The model is first translated away from the pivot point and then positioned (like a camera rotating around the pivot).

He is assuming that the global coordinate system is fixed at the pivot point and your model is transformed around that point.

The next two lines:

InvertMatrix(ModelViewMatrix, newMatrix);

glMultMatrixf(newMatrix);

Can you show what is in the ModelViewMatrix initially?

For more details about viewing, see the red book:

http://fly.cc.fer.hr/~unreal/theredbook/chapter03.html

Almost all the OpenGL reference guides take the reader through the process of model transformations, so take your pick, eg the OpenGl 2.1 reference guide.

For the most part, it’s all completely unnecessary and mostly useless in the shader world…because you define exactly what you want to do in the shaders yourself.

It’s so hard to understand what other people are doing because they are constantly trying to work around the fixed function pipeline or the concatenated modelview matrix. As I said before, if you track your own matrices and use GLSL the whole thing is much simpler because there is no confusion and no guess work.

In my own code, I set the camera starting with identity and then introduce the rotations for the view. Therefore, the camera only ever contains the view part of the modelview for OpenGL. When I render any model, I supply a matrix for that model - often the model has no rotations and only a translation. No matter what rotations or translations, the end result is a model matrix. To render the object, I just pass the projection, model and view matrices to GLSL and that’s it - no guessing and no inverse modelview matrix is EVER required!

The initial modelview matrix is the identity-- it’s set at the top of my code with a call to glLoadIdentity().

Thanks for the link to the red book. I’m still confused about what the matrix inversion is doing in my original code snippet. Part of the confusion has to do with the duality of the modelview transformation-- in your explanation you’re using the notion mentioned in the red book about a “global” fixed coordinate system with all transformations taking place relative to it. When you think that way you visualize the transformations taking place in reverse order from the API calls.

The authors also say you can visualize the modelview transformation in another way, as a series of shifts in a local coordinate system. That makes more sense to me, since then the transformations take place in the same order as the API calls, rather than the reverse, as you mentioned.

But it’s still hard to visualize what results after this call sequence:

- glLoadIdentity()
- glMultMatrix(node->matrix)
- glTranslate(-(node->pivotPoint))
- InvertMatrix(node->mesh->matrix), set the result in matrix M
- glMultMatrix(M);

What the modelview matrix looks like after those five steps and what it represents I can’t picture. What does inverting the mesh matrix, then multiplying it by the modelview matrix, do? Does it result in a modelview matrix that transforms mesh coordinates to node coordinates? (I should mention that meshes are contained in nodes.)

If I suppose that node->matrix and node->pivotPoint represent your view matrix (call it V),

and node->mesh->matrix represents your mesh’s model matrix (call it M),

then every vertex in your mesh will be multiplied first by the inverse of your model matrix; after this the vertex is in world space. After that it will be multiplied by the view matrix, and your final vertex is in camera space (I repeat: if we suppose that V is the view matrix).

Look at the first graph on this site:

http://www.paulsprojects.net/tutorials/smt/smt.html

Thank you Abdallah, that was a very helpful explanation. I believe things are clearing up now.

I still have a little confusion, which I’m hoping someone can help me with. It has to do with camera versus world space.

If I make these calls in OpenGL:

- glLoadIdentity()
- glMultMatrixf()

Where is my camera located after this happens and how is it oriented? Is it still located at (0,0,0) in world space, pointed down the -Z axis? Or has it been moved & reoriented based on the matrix multiplication above?

I ask because in the back of my mind there’s the modelview matrix duality the red book talks about; where you can think of transformations as occurring in a global, fixed coordinate system or in a localized coordinate system, in which the coordinate system changes with every matrix manipulation (glTranslate, glRotate, glMultMatrix etc)

So if I call glLoadIdentity() and then glTranslate(), followed by a few glVertex() calls, I’ll see some points or lines or polygons translated in the appropriate way. (This is assuming I haven’t translated them out of the view frustum)

So it would appear the camera stays at the origin and remains looking down the Z axis. But then methods like gluLookAt() imply you’re moving the camera, even though gluLookAt() internally is nothing more than a series of glTranslate() and glRotate() calls.

I understand all of the individual steps involved in the above statements, but it’s confusing for me to separate modeling from viewing transformations, since they’re one and the same, just thought of differently. That’s my problem–difficulty in separating the two in my mind.

For me, I always use this approach to understanding OpenGL viewing:

There is no camera concept in OpenGL. By default your eye points down the negative Z axis. If you want to move your eye toward the positive X axis, that is the same as moving your object toward the negative X axis.

Think like a photographer who doesn’t move his camera; instead he moves (translates, rotates) the object. OpenGL uses the modelview transformation because there is a duality between moving objects and moving cameras. So, as I said, imagine that your camera is always fixed and all you have to do is position your model in front of the camera.

Thanks Abdallah. Your explanation helps. And thank you for the help you’ve given me in this thread (you too, BionicBytes!)