I researched a lot about the MD5 model format, but some of the examples store the inverse bind pose. As far as I know, MD5 model translations and orientations are absolute, not relative, and inverse bind poses are used to convert local joint space to local object space (correct me if I'm wrong). My aim is to upload quaternions and positions to GLSL for vertex skinning on the GPU. What should I do? I mean, should I use inverse bind poses, or convert the 3D and 4D vectors into 4x4 matrices? If so, why? What do they do?
This isn’t really an OpenGL question. In the future, you might post these in the Math & Algorithms forum instead.
Hopefully someone more knowledgeable than I who's actually used MD5 will chime in here (you out there kRogue?), but I'll try to help you out.
The inverse bind pose takes you from object space (in the bind pose) down to joint-local space, which is the reverse of what you said.
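To make that concrete, here's a minimal translation-only sketch of the two mappings (made-up joint position; a real MD5 joint also carries an orientation quaternion, so the real transforms rotate as well as translate):

```cpp
#include <cassert>

// Translation-only sketch; a real joint transform also has a rotation.
struct Vec3 { float x, y, z; };

// Bind pose of a joint: joint-local space -> object space.
Vec3 bindPose(Vec3 jointLocal, Vec3 jointPosObject) {
    return { jointLocal.x + jointPosObject.x,
             jointLocal.y + jointPosObject.y,
             jointLocal.z + jointPosObject.z };
}

// Inverse bind pose: object space (in the bind pose) -> joint-local space,
// i.e. exactly the reverse mapping of bindPose().
Vec3 inverseBindPose(Vec3 objectSpace, Vec3 jointPosObject) {
    return { objectSpace.x - jointPosObject.x,
             objectSpace.y - jointPosObject.y,
             objectSpace.z - jointPosObject.z };
}
```

So a bind-pose vertex run through the inverse bind pose lands in the joint's local frame, ready to be re-posed by that joint's animated transform.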
My aim is to upload quaternions and positions to GLSL for vertex skinning in GPU. … As far as i know, md5 model translation and orientations are absolute, not relative…
I wouldn't spend much time getting your head inside of MD5. Even if you know how GPU skinning works, MD5 can still be confusing: it doesn't store its data in a GPU-skinning-friendly format, which isn't that helpful these days (for more detail, see this link). If you want to use an MD5 model for GPU skinning, I'd suggest you just use Assimp and this good OpenGL-based tutorial by Etay Meiri to do it:
Nowadays, if you just want to animate a rigged model with a single animation track on the GPU, you upload the final joint transforms for the animation and use them directly to skin your vertices and normals. These final transforms have the inverse bind pose transforms composed in with the posed transforms. And you've got a number of options for representing and applying them on the GPU (e.g. matrices, quaternion+translation, or dual quaternions).
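Here's a sketch of the matrix option, with a hand-rolled minimal Mat4 just for illustration (in practice you'd use GLM or your engine's math library): per joint you upload `posed * inverseBind`, and the skinning loop below is the CPU-side equivalent of what your vertex shader would do with 4 joint indices and weights per vertex.

```cpp
#include <cassert>
#include <cmath>

// Minimal column-major 4x4 matrix, just for illustration.
struct Mat4 { float m[16]; };
struct Vec3 { float x, y, z; };

Mat4 translate(float x, float y, float z) {
    Mat4 r{};   // zero-initialized
    r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
    r.m[12] = x; r.m[13] = y; r.m[14] = z;
    return r;
}

// Column-major multiply: (a*b) applied to v means a*(b*v).
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c*4+row] += a.m[k*4+row] * b.m[c*4+k];
    return r;
}

Vec3 transformPoint(const Mat4& m, Vec3 p) {
    return { m.m[0]*p.x + m.m[4]*p.y + m.m[8]*p.z  + m.m[12],
             m.m[1]*p.x + m.m[5]*p.y + m.m[9]*p.z  + m.m[13],
             m.m[2]*p.x + m.m[6]*p.y + m.m[10]*p.z + m.m[14] };
}

// The per-joint transform you upload: the inverse bind pose (object
// space -> joint space) composed with the posed transform (joint
// space -> posed object space). Read right-to-left as applied to a vertex.
Mat4 skinningMatrix(const Mat4& posed, const Mat4& inverseBind) {
    return mul(posed, inverseBind);
}

// CPU-side equivalent of the vertex-shader skinning loop (4 influences).
Vec3 skinVertex(Vec3 bindPos, const Mat4* finals,
                const int joints[4], const float weights[4]) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = transformPoint(finals[joints[i]], bindPos);
        out.x += weights[i] * p.x;
        out.y += weights[i] * p.y;
        out.z += weights[i] * p.z;
    }
    return out;
}
```

The quaternion+translation and dual-quaternion options replace the Mat4 array with smaller per-joint payloads, but the structure is the same: compose the inverse bind pose in on the CPU once per frame, then blend per vertex on the GPU.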