Hello, I’m using Quaternion-Translation to animate my model; I don’t want to use matrices. Each bone has a rotation (stored as a quaternion) and a vec3 position. I mean, if I used matrices, I would simply multiply the parent’s local matrix by the child’s local matrix to get the child’s global transform — what’s the equivalent with quaternions?
I know a quaternion is just a representation of a rotation. I store the rotation as a quaternion and the position as a vec3; how do I compute the default bind pose from the joint’s parent/child hierarchy? I’ve tried multiplying the parent quaternion by the child quaternion and then adding the child position to the parent position. This is what I got:
While it should be like this:
Hello, I managed to get the default pose position, but when I try to animate the joints I can’t get it working.
I have three Quaternion-Translation pairs per bone: local, bone, and final.
Here is what I’ve done:
For the default bind pose, I store each joint’s rotation and position into its bone QT by multiplying the rotation with the parent’s local rotation, rotating the local position by the parent’s rotation, and adding the result to the parent’s position.
For animating the joints, I interpolate the quaternions between two keyframes using SLERP, multiply the result with the local rotation, rotate the local position, and add the value to the parent’s.
Here is the result:
pAnim.pBones[iIndex].vTransform = vTransform;                           // vTransform is the interpolated position
pAnim.pBones[iIndex].qFinal = pAnim.pBones[iIndex].qLocal * qTransform; // qTransform is the interpolated rotation
pAnim.pBones[iIndex].vFinal = pAnim.pBones[iIndex].vLocal + vTransform;

if (iRoot != -1)
{
    fVec3 v = Vector_Rotate(pAnim.pBones[iIndex].vLocal, pAnim.pBones[iRoot].qFinal);
}
First, does your code work correctly if you use matrices?
If it does, then you just need to consider that a QT pair is just a compact representation of a matrix. Take your existing code, and replace any matrix*matrix and matrix*vector multiplications with QT*QT and QT*vector multiplications.
A matrix representing a rotation and a translation has the form
[R R R T]
[R R R T]
[R R R T]
[0 0 0 1]
where the R part is the rotation and the T part is the translation. For brevity, this will be written [R|T].
Given two such matrices M1=[R1|T1] and M2=[R2|T2], their product is M1*M2 = M = [R|T], where R = R1*R2 and T = R1*T2 + T1.
Given a matrix M=[R|T] and a vector V, their product is M*V = R*V + T.
QT representation just replaces the R portion with a quaternion, using 4 values in place of 9.
Thus, to multiply two QT pairs [Q1|T1] and [Q2|T2], the quaternion part of the result is Q1*Q2 and the translation part is Q1*T2 + T1, while multiplying a QT pair [Q|T] by a vector V gives the vector Q*V + T.
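These two rules can be sketched in C++ like this (a minimal illustration; the types and helper names are mine, not from anyone’s actual code, and the quaternions are assumed to be unit length):

```cpp
#include <cassert>
#include <cmath>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

// Hamilton product a*b: composes two rotations.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotate v by unit quaternion q: q * (0,v) * conj(q).
Vec3 rotate(const Quat& q, const Vec3& v) {
    Quat p  = { 0, v.x, v.y, v.z };
    Quat qc = { q.w, -q.x, -q.y, -q.z };
    Quat r  = mul(mul(q, p), qc);
    return { r.x, r.y, r.z };
}

struct QT { Quat q; Vec3 t; };

// [Q1|T1] * [Q2|T2] = [Q1*Q2 | Q1*T2 + T1]
QT compose(const QT& a, const QT& b) {
    Vec3 rt = rotate(a.q, b.t);
    return { mul(a.q, b.q), { rt.x + a.t.x, rt.y + a.t.y, rt.z + a.t.z } };
}

// [Q|T] * V = Q*V + T
Vec3 transformPoint(const QT& m, const Vec3& v) {
    Vec3 rv = rotate(m.q, v);
    return { rv.x + m.t.x, rv.y + m.t.y, rv.z + m.t.z };
}
```

For a bone hierarchy, `compose(parentGlobal, childLocal)` plays exactly the role that the parent*child matrix product plays in the matrix version.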
To add to what GClements said, after you verify your code works fine with matrices (on the CPU/C++ side and in the GLSL shader), you don’t need to modify any of the matrix calculations on the C++ side.
All you need to do to get the skinning quality benefits and shader uniform footprint reduction benefits is to convert your skinning transforms from matrix to QT or DQ form right before you pass them into the shader, and modify your shader to accept QTs or DQs instead of matrices.
It’s totally optional and a complete side issue whether you actually use QTs or DQs to composite transforms on the CPU side.
Hello, I came here to thank both of you. I finally got the joints working perfectly; however, there’s one last problem. I can’t get the mesh to deform correctly.
Could you share an algorithm for deforming the mesh? After I got the joints working, I searched the internet for the correct way to deform the mesh, but most sources use matrices. There’s an article on MD5 that uses quaternions to deform the mesh, but when I applied that technique it didn’t give the result I wanted. Any references? I’ve been trying to deform the mesh for four days now.
As you can see here, the joints are moving as I want
There are two basic approaches for vertex specification. One is to specify the vertex position in bone space. The other is to provide absolute vertex positions for a specific reference pose.
For the former, you can either:
a) For each bone, transform the position by the bone’s transformation then multiply by the bone’s weight. Then add all of the results. This is Linear Blend Skinning.
b) Blend the transformations for each bone, weighted according to the bone’s weight. Then transform the position by the blended transformation. This is Spherical Blend Skinning, and effectively requires the use of dual quaternions.
The latter is similar, except that you obtain the bone-space position by transforming the absolute position by the inverse of the bone’s transformation in the reference pose.
AFAICT, MD5 uses the former, i.e. each vertex is stored as a list of bone-id, weight, and bone-relative position.
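To make the bone-space approach (a) concrete, here is a minimal LBS sketch in C++. This is my own illustration, not MD5’s or anyone’s actual code: each influence stores a bone index, a weight, and a bone-space position, and the skinned vertex is the weighted sum of each bone’s QT transform applied to that position.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };
struct QT   { Quat q; Vec3 t; };   // rotation + translation pair

// Rotate v by unit quaternion q (expanded form of q * (0,v) * conj(q)).
static Vec3 Rotate(const Quat& q, const Vec3& v) {
    Vec3 u  = { q.x, q.y, q.z };
    Vec3 t  = { 2*(u.y*v.z - u.z*v.y), 2*(u.z*v.x - u.x*v.z), 2*(u.x*v.y - u.y*v.x) };
    Vec3 ct = { u.y*t.z - u.z*t.y, u.z*t.x - u.x*t.z, u.x*t.y - u.y*t.x };
    return { v.x + q.w*t.x + ct.x, v.y + q.w*t.y + ct.y, v.z + q.w*t.z + ct.z };
}

// One bone influence on a vertex: bone id, weight, bone-relative position.
struct Influence { int bone; float weight; Vec3 posInBone; };

// Linear Blend Skinning: transform per bone, then blend the positions.
static Vec3 SkinVertexLBS(const std::vector<Influence>& influences,
                          const std::vector<QT>& bones) {
    Vec3 out = { 0, 0, 0 };
    for (const Influence& in : influences) {
        const QT& b = bones[in.bone];
        Vec3 p = Rotate(b.q, in.posInBone);            // Q*V ...
        p = { p.x + b.t.x, p.y + b.t.y, p.z + b.t.z }; // ... + T
        out.x += in.weight * p.x;
        out.y += in.weight * p.y;
        out.z += in.weight * p.z;
    }
    return out;
}
```

If your vertices are instead stored in model space for a reference pose (the second approach above), you would first transform each one by the inverse of the bone’s bind-pose QT to obtain `posInBone`.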
DavidJr, to add to what GClements has already said, here are the three most common skinning algorithms:
[li]LBS - Linear Blend Skinning. As GClements described. Simple to apply, so a good first skinning technique to try, but it is conceptually nonsense when multiple joint influences are involved, which leads to joint collapse and candy-wrapper artifacts.[/li]
[li]SBS - Spherical Blend Skinning. This is the form which lends itself to Quaternion-Translation (QT) form for your skinning transforms. It works fine if you limit your joint influences to at most 2 per mesh vertex and insist that they be adjacent joints. The reason is that these have a common rotation center, so it’s easy to just rotate around that joint (pure rotation transform). Use SLERP or QLERP for this rotation blend. However, often you want 3+ joint influences around a joint (and/or non-adjacent joint influences). This is a pain in the butt with SBS because you have to solve for a “compromise” rotation center for each influence permutation (which is just a bandaid) and use more complex skinning code that requires more data. Use of DQS avoids all that mess and looks better.[/li]
[li]DQS - Dual Quaternion Skinning. Here, you pass in your skinning transforms in dual quaternion (DQ) form as opposed to QT or matrix form. These allow you to blend both the rotation and translation components of the transforms in a physically-plausible way. These extend trivially to 3+ joint influences and non-adjacent joint influences. As GClements said, here you blend transforms and then use the blended transform to transform the mesh vertex.[/li]
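As a rough sketch of the DQS path, here is the usual dual quaternion linear blending scheme in C++ (all names are my own illustrative choices, and the rotation quaternions are assumed to be unit length): convert each bone’s QT to a DQ, take the weighted sum with hemisphere sign fix-up, renormalize, and transform the vertex once with the blended result.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Quat { float w, x, y, z; };
struct Vec3 { float x, y, z; };

static Quat mul(const Quat& a, const Quat& b) {           // Hamilton product
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}
static Quat conj(const Quat& q)               { return { q.w, -q.x, -q.y, -q.z }; }
static Quat scaled(const Quat& q, float s)    { return { q.w*s, q.x*s, q.y*s, q.z*s }; }
static Quat add(const Quat& a, const Quat& b) { return { a.w+b.w, a.x+b.x, a.y+b.y, a.z+b.z }; }

struct DualQuat { Quat real, dual; };

// Build a DQ from rotation q and translation t: dual = 0.5 * (0,t) * q.
static DualQuat dqFromQT(const Quat& q, const Vec3& t) {
    Quat tq = { 0, t.x, t.y, t.z };
    return { q, scaled(mul(tq, q), 0.5f) };
}

// Dual quaternion linear blending: weighted sum, then normalize by |real|.
static DualQuat dqBlend(const std::vector<DualQuat>& dqs, const std::vector<float>& w) {
    DualQuat out = { {0, 0, 0, 0}, {0, 0, 0, 0} };
    for (size_t i = 0; i < dqs.size(); ++i) {
        // Flip sign so every quaternion lies in the same hemisphere as the first.
        float dot = dqs[0].real.w*dqs[i].real.w + dqs[0].real.x*dqs[i].real.x
                  + dqs[0].real.y*dqs[i].real.y + dqs[0].real.z*dqs[i].real.z;
        float s = (dot < 0.0f) ? -w[i] : w[i];
        out.real = add(out.real, scaled(dqs[i].real, s));
        out.dual = add(out.dual, scaled(dqs[i].dual, s));
    }
    float n = std::sqrt(out.real.w*out.real.w + out.real.x*out.real.x
                      + out.real.y*out.real.y + out.real.z*out.real.z);
    out.real = scaled(out.real, 1.0f / n);
    out.dual = scaled(out.dual, 1.0f / n);
    return out;
}

// Apply a unit DQ to a point: rotate by real, then add t = 2 * dual * conj(real).
static Vec3 dqTransform(const DualQuat& dq, const Vec3& v) {
    Quat p  = { 0, v.x, v.y, v.z };
    Quat rv = mul(mul(dq.real, p), conj(dq.real));        // rotation part
    Quat t  = scaled(mul(dq.dual, conj(dq.real)), 2.0f);  // translation part
    return { rv.x + t.x, rv.y + t.y, rv.z + t.z };
}
```

In a shader you would do the same blend per vertex over its joint influences; the CPU side only has to convert each skinning transform from QT to DQ form before upload.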
Having implemented all 3 of these techniques in the past, I’d suggest you implement LBS first, just as a first step to verify that your per-joint skinning transforms are good (i.e. pass skinning transforms into your shader as matrices, and use LBS in the shader), and then once you’ve tested/debugged that, flip to use of DQS for higher skinning quality (i.e. change to passing in per-joint skinning transforms as DQs, and use DQS in the shader). You can even download ready-to-run shader code for the DQS technique off the net (see below).
If you want to read more about SBS and DQS, here are a few references: