# help me understand gimbal lock

First, I’d like to say I did a search on the forum and on the web about the subject. I still don’t understand it and was looking for some help.

Most explanations of gimbal lock say that you’re “rotating an axis onto another axis, losing a degree of freedom.” This is what does not make sense to me. From my understanding of transformations, when you have a rotation about an axis in matrix form and you apply it to a vector, you can think of it as (a) rotating a point about the global Cartesian coordinate axes and getting new coordinates for the point, or (b) expressing the point p, given in a local coordinate system, as p’ in another coordinate system that is rotated by the matrix…
How does any of this align any of the axes? In interpretation (b), the coordinate system is transformed but is still orthogonal.

Finally, I’d like to know how this affects me. I need to accumulate some transformations on a camera that is to be rotated/moved arbitrarily. Suppose the camera is at some position with an up vector and view direction defined. I then rotate it around the X/Y/Z axes…append it to the transformation matrix, etc. Is there a chance of gimbal lock if, for each transformation, I generate a matrix and append it to the other matrices I keep track of?

Thanks for clarifying this for me.

Euler rotations have an order they are performed in.

Now, imagine an aircraft pointing north and flying level. Its heading and pitch are 0 degrees and 0 degrees respectively.

These correlate with heading = rotation about y (in OpenGL default coordinates) and pitch = rotation about the x axis.

Now imagine the aircraft pitches up to the vertical. This would be zero heading and a 90-degree rotation about x. This is the classic gimbal lock configuration, because if the aircraft keeps pitching up through the vertical, the heading suddenly changes from 0 to 180 degrees. And if you actually make that heading change, you must also change roll (rotation about z) instantly by 180 degrees and start reducing pitch (the roll applied in a dynamics model will handle this).

It is this sudden flip of the heading as pitch passes through the vertical, and the instant change in the derivative of pitch, that causes gimbal lock. In 3D systems it can cause issues if the application of pitch & roll is handled incorrectly, because a model that does not simultaneously flip roll will rotate back up to the vertical and spaz around the axis (and many other similar mathematical errors are possible).

In physical systems the requirement to suddenly rotate through 180 degrees at the vertical can cause mechanical problems. Just understand that with ordered rotations about the cardinal axes, when one rotation passes through the vertical (in this case pitch), the instant flip required of the other rotations is what has the potential for ‘gimbal lock’.
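The collapse at the vertical can be seen numerically. A minimal Python sketch, assuming the convention described above (heading about Y, pitch about X, roll about Z, composed as Ry·Rx·Rz): at pitch = 90 degrees, only the *difference* between heading and roll matters, so two quite different heading/roll pairs produce the identical orientation. That is the lost degree of freedom.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(a, b):
    # 3x3 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def hpr(h, p, r):
    # Heading about Y, pitch about X, roll about Z, applied in that order.
    return mul(rot_y(h), mul(rot_x(p), rot_z(r)))

d = math.radians
# At pitch = 90 deg, heading and roll act about the same world axis:
# (h=30, r=0) and (h=75, r=45) have the same h - r, hence the same matrix.
m1 = hpr(d(30), d(90), d(0))
m2 = hpr(d(75), d(90), d(45))
same = all(abs(m1[i][j] - m2[i][j]) < 1e-12
           for i in range(3) for j in range(3))
print(same)  # heading and roll have collapsed onto one axis
```

Away from pitch = 90 the two matrices would differ, which is easy to check by changing the pitch argument.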

Accumulating transformations is the right way to go if you add each one as a transformation frame to an orientation matrix. Just watch for accumulated rounding errors (keep normalising the vectors of the 3x3 rotation submatrix and ensure it stays orthogonal).

P.S. by accumulate I mean calculate the incremental transform relative to the camera, in camera space, and multiply it with the camera matrix to produce the new camera matrix.
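As a sketch of that accumulate-and-clean-up loop (Python, 3x3 rotation part only; the Gram–Schmidt re-orthonormalisation step is one common way to do the “cleanup”, not the only one):

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def orthonormalize(m):
    # Gram-Schmidt on the row vectors of the 3x3 rotation part.
    def dot(u, v): return sum(x * y for x, y in zip(u, v))
    def norm(u):
        n = math.sqrt(dot(u, u))
        return [x / n for x in u]
    x = norm(m[0])
    y = norm([m[1][i] - dot(m[1], x) * x[i] for i in range(3)])
    # Cross product for the third row keeps the basis right-handed.
    z = [x[1] * y[2] - x[2] * y[1],
         x[2] * y[0] - x[0] * y[2],
         x[0] * y[1] - x[1] * y[0]]
    return [x, y, z]

cam = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # camera orientation
step = rot_x(math.radians(0.1))            # this frame's incremental transform
for frame in range(10000):
    cam = mul(cam, step)                   # apply delta in camera space
    if frame % 100 == 0:
        cam = orthonormalize(cam)          # fight accumulated rounding error

# Check cam is still a proper rotation: cam * cam^T should be the identity.
ident = mul(cam, [list(col) for col in zip(*cam)])
err = max(abs(ident[i][j] - (1 if i == j else 0))
          for i in range(3) for j in range(3))
print(err < 1e-9)
```

The re-orthonormalisation interval (every 100 frames here) is arbitrary; in practice you can do it every frame, it is cheap.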

Adjusting heading, pitch & roll directly as Euler angles is what gets people into trouble, because they are not well behaved w.r.t. interpolation, and the flipping derivatives cause gimbal lock if you don’t manage the multi-axis flip correctly.

ah thanks. Much clearer to me now too.

Thanks.

So if you are keeping track of your rotations in a roll/pitch/yaw “vector” and adding/subtracting numbers to each element to rotate, you run into problems. But this problem would never occur if, for every rotation about X, Y or Z, you multiplied your transform matrix with the Mx, My or Mz rotation matrices?

Heading, pitch & roll, not roll pitch yaw: “yaw” implies pre-multiplication of yaw rather than post-multiplication of heading. Having yaw in there is useless for intuitive concepts like figuring out what the heading and pitch are; it totally stuffs things up, at least from the point of view of matching a compass heading etc. to the rotation values you have. You can still apply yaw etc. in the simulation framework; I just think it’s incredibly clunky to describe the orientation of anything as RPY. And it matters when interoperating with other systems, it’s not merely aesthetic; I’ve seen systems made useless as a result (MultiGen Creator, for example, stuffed up its DOF node with an idiotic rotation order).

If your transformation is incremental and not too severe, yes: take the delta THIS FRAME relative to the aircraft orientation and simply multiply it with the current matrix, and it works. It’s a 4x4 multiply (3x4 really), then cleanup.
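To illustrate why the incremental multiply sidesteps the flip (a Python sketch of the 3x3 rotation part, with a pitch-only delta for simplicity): accumulating small pitch increments carries the orientation straight through the vertical, ending at the same matrix as a single 180-degree rotation, with no sudden 180-degree jump in any stored angle along the way.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

cam = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
delta = rot_x(math.radians(2))   # this frame's pitch increment, in camera space
for _ in range(90):              # pitch up 180 degrees total, through the vertical
    cam = mul(cam, delta)

# The accumulated matrix matches a direct 180-degree pitch rotation.
target = rot_x(math.radians(180))
ok = all(abs(cam[i][j] - target[i][j]) < 1e-9
         for i in range(3) for j in range(3))
print(ok)  # matrix accumulation passes through the vertical smoothly
```

The matrix never needs a discontinuous change; the discontinuity only appears if you insist on reading the orientation back out as Euler angles near the vertical.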

I describe this in the motion model section here:

[http://www.sgi.com/products/software/performer/brew/aqua.html](http://www.sgi.com/products/software/performer/brew/aqua.html)

Although I would certainly write this differently today (it is about 10 years old). Of course you can tidy the 3x3 rotation submatrix, and you don’t need to extract HPR as I did; you can simply use the matrix directly in your viewing stack (or use the transpose for viewing, depending on the details). I extracted HPR because the middleware I was using required it.
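If you do need to extract heading/pitch/roll, here is a sketch of how it can be done for the convention assumed earlier in the thread (Ry(h)·Rx(p)·Rz(r), heading about Y, pitch about X, roll about Z); note that pitch = ±90 degrees, where cos(p) = 0, is exactly the degenerate gimbal lock case where heading and roll can no longer be separated:

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def extract_hpr(m):
    # For m = Ry(h) * Rx(p) * Rz(r):
    #   m[1][2] = -sin(p)
    #   m[0][2] =  sin(h)cos(p),  m[2][2] = cos(h)cos(p)
    #   m[1][0] =  cos(p)sin(r),  m[1][1] = cos(p)cos(r)
    p = math.asin(max(-1.0, min(1.0, -m[1][2])))
    h = math.atan2(m[0][2], m[2][2])
    r = math.atan2(m[1][0], m[1][1])
    return h, p, r  # degenerate when cos(p) == 0, i.e. pitch = +/-90 deg

d = math.radians
m = mul(rot_y(d(40)), mul(rot_x(d(25)), rot_z(d(-10))))
h, p, r = extract_hpr(m)
print(round(math.degrees(h)), round(math.degrees(p)), round(math.degrees(r)))
# -> 40 25 -10
```

The formulas follow directly from multiplying out the three rotation matrices in that order; a different rotation order gives different extraction formulas.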

P.S. I guess from a vehicle’s point of view direct manipulation of roll, pitch & yaw makes sense, so you’re right in that regard: you have RPY inputs plus a simulation which alters the orientation. So I retract my objection, except as far as expressing the final vehicle orientation in those terms, because in world space RPY makes less intuitive sense than HPR, while in vehicle space it actually does.

Oh well, just one more reason to multiply the incremental dynamics, in vehicle space, onto the end of the transform to modify its world matrix.