# modelview matrix optimization

Most textbooks say the modelview matrix is a product of linear transformation matrices followed by a translation, like this:

MV = R * T(-origin)

In my app, I usually transform my models like this:

MV * M

where M is the model transformation matrix. However, matrix multiplication is associative so I can write the above product as:

R * (T(-origin) * M)

The T(-origin) * M product can be calculated simply by putting the translation values in the rightmost column of M, saving one matrix multiplication (R * T(-origin)). I’m using this scheme in my app and it works nicely. Now, are there any problems with it? Should I not be using this optimization?
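To illustrate the identity being relied on here, a minimal sketch in C (assuming row-major 4x4 matrices and an affine M whose bottom row is 0 0 0 1): left-multiplying M by a translation matrix only adds the translation to M's rightmost column, so the full multiply and the direct column write produce the same matrix.

```c
#include <assert.h>
#include <math.h>

typedef struct { float m[4][4]; } Mat4; /* row-major, affine: bottom row 0 0 0 1 */

static Mat4 mat4_mul(const Mat4 *a, const Mat4 *b) {
    Mat4 r;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a->m[i][k] * b->m[k][j];
            r.m[i][j] = s;
        }
    return r;
}

static Mat4 mat4_translation(float tx, float ty, float tz) {
    Mat4 t = {{
        {1, 0, 0, tx},
        {0, 1, 0, ty},
        {0, 0, 1, tz},
        {0, 0, 0, 1}
    }};
    return t;
}

/* The shortcut: instead of computing T(t) * M with a full 4x4 multiply,
 * add t to the rightmost column of M. For affine M this is exact. */
static Mat4 translate_in_place(Mat4 m, float tx, float ty, float tz) {
    m.m[0][3] += tx;
    m.m[1][3] += ty;
    m.m[2][3] += tz;
    return m;
}
```

Both paths yield identical matrices; the shortcut just replaces 64 multiply-adds with three additions.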

> Most textbooks say the modelview matrix is a product of linear transformation matrices followed by a translation, like this:

The “modelview” matrix is generally considered to be the matrix that transforms vertex positions from the space they are stored in as attributes to camera space (or eye space, if you prefer), which is the space expected by the projection matrix. Thus “modelview” is literally that: model to view. The modelview matrix specifically and deliberately skips a transformation through an explicit world space.

> MV = R * T(-origin)

doesn’t make any sense with regard to that. Is MV supposed to be the modelview matrix or just the world-to-camera matrix?

If MV is supposed to be just the world-to-camera matrix (which is only a component of the modelview matrix), and R is the rotation component and T is the translation component, then that makes some sense.

However, the most common code for creating a world-to-camera matrix is some derivative of gluLookAt. The most common implementation of this function does not create separate rotation and translation matrices and multiply them together. So I don’t see what this “most textbooks say” stuff is talking about. It would seem that the most common code already has your optimization.

Lastly, it’s not really an optimization. You’re talking about the time to compute a single matrix. One that you only need to compute once per frame. That won’t even show up in a profiler, let alone have an actual impact on your overall framerate. You’d get more out of optimizing the construction of M (model-to-world) than the construction of MV (world-to-camera).

Most textbooks multiply R and T from what I see, but I use this to render a huge world, where the floats lose accuracy in the far reaches of it. It’s true most books use different terms to describe the matrix, such as a viewing matrix or world-to-camera matrix, among others. Also, most of them don’t even mention gluLookAt().

I calculate the translation component separately on the CPU, to not lose float precision. If I were to use doubles, would this not impact performance? If you think of it this way, maybe it is an optimization.
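A minimal sketch of that double-precision idea (in C, with hypothetical names): keep object and camera positions as doubles, perform the one subtraction per object on the CPU, and only then narrow the small relative offset to float for the matrix that gets uploaded. The large absolute coordinates never pass through float, so the offset keeps its precision.

```c
#include <assert.h>

typedef struct { double x, y, z; } DVec3;
typedef struct { float x, y, z; } FVec3;

/* Camera-relative translation: subtract in double precision on the CPU,
 * then narrow the (now small) relative offset to float for the GPU. */
static FVec3 relative_to_eye(DVec3 world, DVec3 eye) {
    return (FVec3){
        (float)(world.x - eye.x),
        (float)(world.y - eye.y),
        (float)(world.z - eye.z)
    };
}
```

Narrowing first instead would destroy the offset: near 10,000,000 a float can only resolve steps of 1.0, so a fractional position rounds away before the subtraction ever happens.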

> It’s true most books use different terms to describe the matrix, such as a viewing matrix or world-to-camera matrix, among others.

That doesn’t change the fact that you’re using the terminology wrong. The word “modelview” has an accepted definition; the term itself comes from the OpenGL enumerator GL_MODELVIEW. The OpenGL fixed-function pipeline defines the GL_MODELVIEW matrix as the matrix to be applied to vertex position attributes, which transforms them into the space expected by the projection matrix. This is also the space that lighting is done in.

The modelview matrix is not just the transform from world space to camera space.

> I calculate the translation component separately on the CPU, to not lose float precision. If I were to use doubles, would this not impact performance? If you think of it this way, maybe it is an optimization.

My point still stands. While what you’re doing is technically faster, it will not impact performance. This is a single matrix computation. You do this once per frame. You could put 30 arc-cosine operations in the computation, throw in quad-precision floating-point math, and do twelve matrix multiplies, and you’d still never notice any difference in your application’s speed.

The overall performance of your application is not dictated by the performance of a once-per-frame operation unless that operation is incredibly expensive (something that provokes a GPU sync, for example). You have to do most normal operations a lot before they become relevant for performance.

This is why you will often hear programmers say not to optimize anything until you have actual profiling data in hand. Because until you do, you do not know where your performance is going. You could spend a week optimizing some function only to find out that this work does nothing for performance because the code is only called three times per frame.

Is it just that you upload double matrices and float vertex attributes, and everything gets promoted to doubles on the GPU? Still, even if floats are promoted to doubles, this has to impact performance, and I would probably have to do every single transform using doubles, not floats.