HW-aided skeletal mesh deformation...

Well, I’m slowly building my own game engine, and since I’m aiming at commercial quality, I came to the conclusion that a simple animation system based on just rotations and translations (keyframed) wasn’t pretty enough for some complex objects such as characters.

I read a little about skeletal deformation, and found that approach quite promising for realism. The big issue, though, is speed. Display lists are fast, easy to use, and small. However, they just won’t suit such an approach.

What would be the fastest way of computing a deformation? Is there some sort of hardware implementation? Is it reliable enough, and widely implemented?

Or do I have to compute everything by hand? (Transform each vertex according to the influencing matrices resulting from the bone orientations… which would imply quite a few floating-point multiplications.)

You can use a vertex program for “skinning”.

Or you can do it on the CPU.
In that case, you can save some cycles by not updating vertices if the skeleton has not changed shape.
Also, be sure to keep each joint’s effective transformation matrix - i.e. the one relative to the object’s location and rotation. No need to keep pushing and popping matrices.
All you have to do after you’ve got your transformed vertices is set the rotation and translation of the whole object, and then call glDrawElements.
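
In rough C++ terms, the CPU path boils down to something like this (just a sketch, assuming two weights per vertex; the struct and function names are mine, not from any particular engine):

// Joint matrices are already flattened into object space, so there is no
// pushing/popping at draw time, and skinVertices() only runs on frames
// where the skeleton actually changed shape.
struct SkinVertex {
    float pos[3];
    int   bone[2];     // indices into the joint palette
    float weight[2];   // weight[0] + weight[1] == 1
};

struct Joint {
    float m[12];       // effective 3x4 object-space transform, row-major
};

// Transform a point by a 3x4 matrix.
static void transformPoint(const float* m, const float* in, float* out) {
    out[0] = m[0]*in[0] + m[1]*in[1] + m[2] *in[2] + m[3];
    out[1] = m[4]*in[0] + m[5]*in[1] + m[6] *in[2] + m[7];
    out[2] = m[8]*in[0] + m[9]*in[1] + m[10]*in[2] + m[11];
}

void skinVertices(const SkinVertex* src, float* dst, int count, const Joint* palette) {
    for (int i = 0; i < count; ++i) {
        float a[3], b[3];
        transformPoint(palette[src[i].bone[0]].m, src[i].pos, a);
        transformPoint(palette[src[i].bone[1]].m, src[i].pos, b);
        for (int k = 0; k < 3; ++k)
            dst[i*3 + k] = a[k]*src[i].weight[0] + b[k]*src[i].weight[1];
    }
}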

BTW: many objects probably don’t need animation anyway, so the impact of those calculations probably isn’t all that bad.
And you may be able to use those transformed vertices for your collision detection as well.

The deformation is really quick in HW; there is also an SSE paper about it from Intel, if you need a software solution, too.
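
For the software route, the SSE version is basically the same math laid out so each bone transform becomes three multiply-adds (a rough sketch with my own layout and names, not taken from the Intel paper):

#include <xmmintrin.h>

// Each bone matrix stored as four SSE columns (column-major 3x4, translation
// in col[3]); pos is (x, y, z, 1) in a __m128.
struct BoneMatrixSSE {
    __m128 col[4];
};

static inline __m128 skinPos(__m128 pos,
                             const BoneMatrixSSE& m0, float w0,
                             const BoneMatrixSSE& m1, float w1)
{
    __m128 x = _mm_shuffle_ps(pos, pos, 0x00);  // splat pos.x
    __m128 y = _mm_shuffle_ps(pos, pos, 0x55);  // splat pos.y
    __m128 z = _mm_shuffle_ps(pos, pos, 0xAA);  // splat pos.z

    // Transform by each bone: col0*x + col1*y + col2*z + translation.
    __m128 a = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m0.col[0], x), _mm_mul_ps(m0.col[1], y)),
                          _mm_add_ps(_mm_mul_ps(m0.col[2], z), m0.col[3]));
    __m128 b = _mm_add_ps(_mm_add_ps(_mm_mul_ps(m1.col[0], x), _mm_mul_ps(m1.col[1], y)),
                          _mm_add_ps(_mm_mul_ps(m1.col[2], z), m1.col[3]));

    // Blend the two transformed positions by their weights.
    return _mm_add_ps(_mm_mul_ps(a, _mm_set1_ps(w0)),
                      _mm_mul_ps(b, _mm_set1_ps(w1)));
}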

But as T101 said, there normally aren’t that many objects with deformable surfaces, mostly only characters. However, characters (unless you do some fancy hit detection on them) are normally represented by a few simple primitives for collision, so I would favor HW skinning for them, although you won’t have their transformed mesh available in the app.

I use the following vertex program; of course, depending on your lighting, you would add normal transformation and lighting computation too.

!!ARBvp1.0

# SKIN Shader 
#	2 weight skinning, unlit, fog, 2 textures
#	
#	skinning: two weights and matrices per vertex
#	fogcoord: distance to eyeplane
#	textures: no texgen, no texmatrix
#
#	by Christoph Kubisch

# Incoming vertex attributes:
ATTRIB inPos = vertex.position;
ATTRIB inTex0 = vertex.texcoord[0];
ATTRIB inTex1 = vertex.texcoord[1];
ATTRIB inColor = vertex.color;
ATTRIB inWeights = vertex.attrib[6];	# = { index0, weight0, index1, weight1 }

# Outgoing vertex attributes:
OUTPUT outPos = result.position;
OUTPUT outColor = result.color;
OUTPUT outTex0 = result.texcoord[0];
OUTPUT outTex1 = result.texcoord[1];
OUTPUT outFog  = result.fogcoord;

PARAM  mvp[4]       = { state.matrix.mvp };
PARAM  matrices[60]	= { program.env[0..59] };
TEMP   xfPos, temp;
ADDRESS arOffset;					# Address register used to read offsets.

# WEIGHT TRANSFORMS
# Compute First Weight
# get matrix index
	ARL		arOffset.x, inWeights.x;
# POSITION
	DP4		temp.x,		matrices[arOffset.x],	inPos;
	DP4		temp.y,		matrices[arOffset.x+1],	inPos;
	DP4		temp.z,		matrices[arOffset.x+2],	inPos;
	MUL		xfPos,		temp, inWeights.y;

# Compute Second Weight
# get matrix index
	ARL		arOffset.x, inWeights.z;
# POSITION
	DP4		temp.x,		matrices[arOffset.x],	inPos;
	DP4		temp.y,		matrices[arOffset.x+1],	inPos;
	DP4		temp.z,		matrices[arOffset.x+2],	inPos;
	MAD		xfPos,	temp, inWeights.w,	xfPos;


# VIEW TRANSFORMS
# Transform the vertex to clip coordinates.
	MOV		xfPos.w,	inPos.w;
	DP4		temp.x, mvp[0], xfPos;
	DP4		temp.y, mvp[1], xfPos;
	DP4		temp.z, mvp[2], xfPos;
	DP4		temp.w, mvp[3], xfPos;
	MOV		outPos, temp;
	
# Output Fog
	ABS		outFog.x, temp.z;
# Output Color
	MOV		outColor, inColor;

# Output Tex
	MOV		outTex1, inTex1;
	MOV		outTex0, inTex0;

END
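
On the application side, feeding that program looks roughly like this (my own helper names; the extension entry points are assumed to be loaded already). Since matrices[60] maps to program.env[0..59] and each bone takes three row slots, the index packed into vertex.attrib[6] has to be the row offset - with this packing, boneIndex * 3:

// Per frame: upload every bone's effective 3x4 matrix, one row per env slot.
void uploadBonePalette(const float (*rows)[4], int numBones)   // numBones <= 20 here
{
    for (int b = 0; b < numBones; ++b)
        for (int r = 0; r < 3; ++r)
            glProgramEnvParameter4fvARB(GL_VERTEX_PROGRAM_ARB,
                                        b * 3 + r, rows[b * 3 + r]);
}

// Per vertex (in practice you'd set up a vertex array on attribute 6 instead):
// vertex.attrib[6] = { index0, weight0, index1, weight1 }
void setVertexWeights(int bone0, float w0, int bone1, float w1)
{
    glVertexAttrib4fARB(6, (float)(bone0 * 3), w0, (float)(bone1 * 3), w1);
}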

First of all, there is no such thing as “commercial quality”. If you can make a living from your stuff, it has “commercial” quality :)

Skeletal animation and matrix palette skinning (a.k.a. “soft skinning”) are computationally heavy. The OpenGL API has extensions for matrix palette skinning, but they are meant for the fixed-function pipeline. If you’re writing a shader, you must roll your own.

Most of the time you are going to transform more than one vector per vertex, say the position and the normal. It is optimal to first accumulate the weighted matrices into a single final matrix and then transform your various vectors with that final matrix.
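
In code, that accumulation is just a weighted sum of the bone matrices before any vertex work (a sketch for 3x4 row-major matrices and two influences; the names are mine):

// Blend the bone matrices once per vertex...
void blendMatrices(float out[12], const float* m0, float w0,
                   const float* m1, float w1)
{
    for (int i = 0; i < 12; ++i)
        out[i] = m0[i] * w0 + m1[i] * w1;
}

// ...then transform the position with the blended 3x4, and the normal with
// its upper 3x3 (fine as long as the bones carry no non-uniform scale).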

Okay, I’ll try to handle all this. (I’ll combine animation techniques and only use deformation for the objects that really need it.) (I’m really just starting on animation, and my OpenGL knowledge looks like Swiss cheese.)

Just a question for now: What’s a “Vertex Program”?

Code executed by the 3D card (hopefully) for every vertex you send as long as the Vertex Program is active.

It can’t invent new vertices, but it can modify them.

The term in Direct3D for this is “Vertex Shader”.
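
Getting one running with ARB_vertex_program looks roughly like this (a sketch, no error checking; programSource is assumed to hold the “!!ARBvp1.0 … END” text posted above, and the extension entry points are assumed to be loaded):

GLuint prog;
glGenProgramsARB(1, &prog);
glBindProgramARB(GL_VERTEX_PROGRAM_ARB, prog);
glProgramStringARB(GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                   (GLsizei)strlen(programSource), programSource);

glEnable(GL_VERTEX_PROGRAM_ARB);    // from here on, every vertex you send goes through it
// ... draw ...
glDisable(GL_VERTEX_PROGRAM_ARB);   // back to the fixed-function pipeline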

All right, that looks promising. Two more questions (for now):
There’s the OpenGL Shading Language. Is it recommended now? How widely is it implemented?

And would any of you happen to have a good starting point, or some tutorial, on how to write one of these shaders? (I read a chunk of the OpenGL Shading Language spec… not too friendly.)