# Rotating a quaternion around a point

So I’m trying to implement skeletal animations. I can rotate my 3D mesh around (0,0,0), but I don’t know how to change that point. It’s quite necessary to be able to rotate around a point, because if I rotate an arm, I want it to rotate around the shoulder socket, not the belly button. If anybody could point me in the right direction or explain to me how rotations around a point work with quaternions, that would be great! Thanks!

Hey, how do you rotate a quaternion around a point?

To be more specific, if I want to rotate the shoulder of a humanoid, how do I know what position the socket of the shoulder is at? And from there, how do you rotate a quaternion around a point (such as the shoulder joint)?

Edit: So I read that you move the vertex being affected by a bone to the origin, keeping it at the same distance from the origin as it was from the rotation point. So another question: how do I know where the positions of the bones are? The only information about bones I have from the COLLADA file is the initial transformation matrix and the keyframe matrices.

So to sum it all up: how do I know the positions of the joints?

You don’t. A quaternion represents a rotation. It doesn’t have a position, so you can’t rotate it around anything.

I assume what you meant to ask was: how do you represent a rotation about a point?

In which case, the answer (for OpenGL’s purposes) is that you use a matrix. First, you convert the quaternion to a rotation matrix (a library such as GLM should be able to do this for you, if you don’t already have the code). Then, if you want to construct the matrix for rotation about a specific point, you compose the rotation matrix with equal and opposite translation matrices, i.e.

```
M = T(x,y,z) . R(q) . T(-x,-y,-z)
```
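Here’s a hand-rolled sketch of that composition, assuming column-major matrices as OpenGL uses them (with GLM, `glm::translate` and `glm::mat4_cast` would replace most of this):

```cpp
#include <cmath>

// Hand-rolled column-major (OpenGL-style) 4x4 matrices and quaternions.
struct Mat4 { float m[16]; };                    // m[col*4 + row]
struct Quat { float w, x, y, z; };

Mat4 identity() {
    Mat4 r = {};
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    return r;
}

Mat4 translate(float x, float y, float z) {
    Mat4 r = identity();
    r.m[12] = x; r.m[13] = y; r.m[14] = z;       // translation lives in the 4th column
    return r;
}

// Standard unit-quaternion -> rotation-matrix conversion.
Mat4 toMat4(const Quat& q) {
    Mat4 r = identity();
    r.m[0]  = 1 - 2*(q.y*q.y + q.z*q.z);
    r.m[1]  =     2*(q.x*q.y + q.w*q.z);
    r.m[2]  =     2*(q.x*q.z - q.w*q.y);
    r.m[4]  =     2*(q.x*q.y - q.w*q.z);
    r.m[5]  = 1 - 2*(q.x*q.x + q.z*q.z);
    r.m[6]  =     2*(q.y*q.z + q.w*q.x);
    r.m[8]  =     2*(q.x*q.z + q.w*q.y);
    r.m[9]  =     2*(q.y*q.z - q.w*q.x);
    r.m[10] = 1 - 2*(q.x*q.x + q.y*q.y);
    return r;
}

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r = {};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c*4+row] += a.m[k*4+row] * b.m[c*4+k];
    return r;
}

// M = T(p) * R(q) * T(-p): move the pivot to the origin, rotate, move back.
Mat4 rotateAboutPoint(const Quat& q, float px, float py, float pz) {
    return mul(translate(px, py, pz), mul(toMat4(q), translate(-px, -py, -pz)));
}

// Apply M to a point (x, y, z, 1).
void transformPoint(const Mat4& M, const float p[3], float out[3]) {
    for (int r = 0; r < 3; ++r)
        out[r] = M.m[r]*p[0] + M.m[4+r]*p[1] + M.m[8+r]*p[2] + M.m[12+r];
}
```

For example, rotating the point (2,0,0) by 90° about the Z axis around the pivot (1,0,0) lands at (1,1,0), as you’d expect.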

But you probably don’t want to do that. The usual way to perform skeletal animation is to construct a tree of transformations by composing alternating rotations (for the joints) and translations (for the bones).

So, suppose that the hierarchy is: pelvis -> (hip) -> thigh -> (knee) -> lower leg -> (ankle) -> foot.

You start with a translation representing the position of the pelvis, followed by the rotation at the hip, followed by a translation of the length of the thigh, followed by the rotation at the knee, followed by a translation of the length of the lower leg, followed by the rotation at the ankle.

If you take each of the matrices you constructed, and transform the origin point (0,0,0,1) by them, the transformed vertices are the centres of the joints. If you draw lines between successive vertices, you’ll get a “stick man” representation of the skeleton.
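The hip → knee → ankle part of that chain can be sketched in 2D, which is enough to show the alternating rotate/translate pattern (the bone lengths and pelvis position here are made-up example values):

```cpp
#include <cmath>

// 2D homogeneous transforms: each joint contributes a rotation, each bone a
// translation. Lengths and positions are illustrative, not from any real rig.
struct Mat3 { float m[9]; };                     // row-major 3x3

Mat3 mul(const Mat3& a, const Mat3& b) {
    Mat3 r = {};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r.m[i*3+j] += a.m[i*3+k] * b.m[k*3+j];
    return r;
}

Mat3 translate(float x, float y) { return {{1,0,x, 0,1,y, 0,0,1}}; }

Mat3 rotate(float a) {                           // rotation about the local origin
    float c = std::cos(a), s = std::sin(a);
    return {{c,-s,0, s,c,0, 0,0,1}};
}

// Transforming the origin (0,0,1) just reads off the translation column.
void origin(const Mat3& M, float out[2]) { out[0] = M.m[2]; out[1] = M.m[5]; }

// pelvis -> (hip) -> thigh -> (knee) -> lower leg -> (ankle).
// out[0] = hip, out[1] = knee, out[2] = ankle positions for the given angles.
void legJoints(float hipAngle, float kneeAngle, float out[3][2]) {
    Mat3 pelvis = translate(0.0f, 1.0f);                  // pelvis at (0,1)
    Mat3 hip    = mul(pelvis, rotate(hipAngle));
    Mat3 knee   = mul(hip, mul(translate(0.0f, -0.5f),    // thigh length 0.5
                               rotate(kneeAngle)));
    Mat3 ankle  = mul(knee, translate(0.0f, -0.5f));      // lower leg length 0.5
    origin(hip, out[0]); origin(knee, out[1]); origin(ankle, out[2]);
}
```

With both angles at zero the leg hangs straight down; bending the knee 90° swings the ankle out sideways. Drawing lines hip → knee → ankle gives exactly the “stick man” described above.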

Okay, so when I export my rigged mesh I only get the transformation matrices (COLLADA file); I don’t get a position in space where the joints are. Oooor, are the transformation matrices actually representations of the position? In that case, how do I get the positional data of a joint from a transformation matrix? Thanks!

On a per-joint basis, there are two transforms involved.

1. The orientation transforms for the joints (one per joint) define the “bind pose” of the skeleton. This is the pose in which the skin mesh is rigged to the skeleton.

2. Then there are the animation transforms for the joints, which are encoded in the animation keyframes (one per joint per animation per keyframe). These define how to animate the “bind pose” skeleton, moving it into a new pose (the pose indicated by that keyframe).

See this post for details:

Your orientation transforms typically are (rotation * translation) transforms, while your animation transforms are typically rotation-only transforms (because bones don’t usually change lengths :-).

So to your question of the position in space where the bones are … you need to specify what space you want them in. Let’s assume the bind pose, in which case your orientation transforms are all that matter. Composite your orientation transforms up the skeleton from the joint you’re interested in to the root joint (B = O_root * … * O_joint); that gives you the bind pose transform for the joint, which is a transform that takes you from joint-space to object-space. If your orientation transforms were rotation*translation transforms, then this composite transform is also a rotation*translation transform. The translation component of this transform is the position of the joint in object-space (in the bind pose).

(noan567, here’s my response to your cross-post on the Advanced forum, which I deleted. Please don’t cross post):

Ok, reading between the lines it sounds like you have two problems: you don’t understand quaternions yet, and you’re trying to apply them to skeletal animations which you don’t understand yet.

I would “highly” recommend separating these two for now. Learn one cold. Then add the other. I’d pick the one you’re most familiar with and run with that.

As to quaternions, I’d stop thinking of them as objects to rotate. Think of them as “the rotation”. It’s just another way, besides a 3x3 orthonormal matrix, to represent a rotation about an axis. There are some subtleties, but conceptually that’s it!
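For instance, applying “the rotation” to a vector v is q·(0,v)·q̄, the quaternion analogue of multiplying by the 3x3 matrix. A minimal sketch:

```cpp
#include <cmath>

// A unit quaternion *is* the rotation: applying it to a vector v is
// q * (0,v) * conjugate(q), the quaternion equivalent of R * v.
struct Quat { float w, x, y, z; };

// Hamilton product.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Build the rotation of `angle` radians about a unit-length axis.
Quat fromAxisAngle(float ax, float ay, float az, float angle) {
    float s = std::sin(angle * 0.5f);
    return { std::cos(angle * 0.5f), ax*s, ay*s, az*s };
}

void rotateVec(const Quat& q, const float v[3], float out[3]) {
    Quat p = { 0.0f, v[0], v[1], v[2] };
    Quat c = { q.w, -q.x, -q.y, -q.z };       // conjugate = inverse for unit q
    Quat r = mul(mul(q, p), c);
    out[0] = r.x; out[1] = r.y; out[2] = r.z;
}
```

Rotating (1,0,0) by 90° about Z this way gives (0,1,0), the same answer the matrix form gives.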

Before you go applying them to skeletal animations though, I’d strongly recommend understanding where Linear Blend Skinning falls flat (w.r.t. skinning quality), then understand quaternions well, then read up on dual quaternions, which are actually better for skeletal animation than mere quaternions (Ben Kenwright has a pretty good whitepaper here: Dual Quaternions). For more details on why you’d want to use quaternions or dual quaternions with skeletal animation, see this post: Re: Skinning algorithms. And ask questions if you’ve got 'em! Skeletal animation is a little tricky to pick up, especially since a lot of the blog/wiki-type info on it out there is sketchy at best, and it seems like everybody uses different terms and different transform groupings when describing the math.