Some advice for model animation/rendering

Hello, I’m doing some work on a school project, including model animation in the form of a skinned mesh, deformed by a skeleton via rigid (unweighted) bone influences.

I’ve written an exporter and developed a simple model format for getting the data into my demo.
What I export is basically ready-to-use vertex arrays + bone data and vertex influences.

I will do the skinning on the CPU for now, as the demo has to run on somewhat low-end hardware.

Obviously, I didn’t quite think this whole stuff through before I started…

What I need some advice on is how to actually deform and render the mesh. It’s a piece of cake in immediate mode (just transform and draw as you go), but doing this efficiently with vertex arrays seems a bit harder.

The obvious solution is to “render” the transformed data into a temporary buffer, and then draw that one using vertex arrays, updating it each frame.

Could someone please detail this step for me, or give me some pointers? I’m having a hard time putting it all together… I figure the effort spent on building single-indexed vertex arrays from MAX’s triple-indexed ones might be completely wasted after all.
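For what it’s worth, here’s a minimal sketch of the temporary-buffer approach for rigid (one bone per vertex) skinning, with made-up names and a 3x4 matrix layout chosen just for illustration. Each frame you fill the scratch buffer from the bind pose and draw from it:

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

// 3x4 bone matrix: row-major 3x3 rotation in m[0..8], translation in m[9..11].
struct BoneMatrix { float m[12]; };

// Rigid skinning: each vertex is driven by exactly one bone, so we just
// transform each bind-pose position by its bone's current matrix and write
// the result into a scratch buffer that the vertex array will point at.
void SkinRigid(const std::vector<Vec3>&       bindPositions,
               const std::vector<int>&        boneIndex,   // one bone per vertex
               const std::vector<BoneMatrix>& palette,     // this frame's bone matrices
               std::vector<Vec3>&             out)         // persistent scratch buffer
{
    out.resize(bindPositions.size());
    for (std::size_t i = 0; i < bindPositions.size(); ++i)
    {
        const Vec3&  p = bindPositions[i];
        const float* m = palette[boneIndex[i]].m;
        out[i].x = m[0]*p.x + m[1]*p.y + m[2]*p.z + m[9];
        out[i].y = m[3]*p.x + m[4]*p.y + m[5]*p.z + m[10];
        out[i].z = m[6]*p.x + m[7]*p.y + m[8]*p.z + m[11];
    }
}
```

You’d then do something like `glVertexPointer(3, GL_FLOAT, 0, &out[0])` and draw with `glDrawElements`. Note that only the positions change per frame; the index array stays fixed, so the single-indexing work is not wasted.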

Furthermore, I would like some advice on making certain features of the models “glow”, in the sense of lighting (not by lighting the polys, or changing material properties to have high emission). The goal is to have fancy “glow in the dark” effects on the models.

The only solution I have found is to rip these polys out and draw them after the main model in a separate draw call…

Thoughts and directions much appreciated
Martin Persson, Sweden

Building vertex arrays is hardly an advanced question; I’d suggest you read up on them in the OpenGL Red Book.

In OpenGL vertex arrays, you index all the vertex attributes (normals, colours, texture coordinates, weights, etc) using only one index, so as part of your importer step you’ll need to convert the triple+ MAX indexing to a singly indexed mesh.
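A rough sketch of that conversion (names are my own, not from any particular exporter): treat each face corner’s (position, normal, texcoord) index triple as a key, and emit a new unified vertex the first time a triple is seen:

```cpp
#include <array>
#include <cstddef>
#include <map>
#include <vector>

// MAX-style face corner: separate indices into the position, normal and
// texcoord arrays. OpenGL vertex arrays need one shared index, so every
// distinct (pos, nrm, uv) combination becomes its own output vertex.
using Corner = std::array<int, 3>;

// Returns the unified index buffer; 'unified' receives, for each new vertex,
// the original (pos, nrm, uv) triple so the caller can gather the attributes.
std::vector<unsigned> Unify(const std::vector<Corner>& corners,
                            std::vector<Corner>&       unified)
{
    std::map<Corner, unsigned> seen;
    std::vector<unsigned> indices;
    indices.reserve(corners.size());
    for (const Corner& c : corners)
    {
        auto it = seen.find(c);
        if (it == seen.end())
        {
            it = seen.emplace(c, static_cast<unsigned>(unified.size())).first;
            unified.push_back(c);
        }
        indices.push_back(it->second);
    }
    return indices;
}
```

Corners that share the same position but a different normal or UV (hard edges, UV seams) correctly come out as separate vertices.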

The glow effect that you see more and more in video games is a trick. Basically, you tag all the geometry you want as “glowing”, and when you render:

  1. Render the scene normally.
  2. Render the glowing bits into an offscreen buffer (which may or may not be smaller than the screen).
  3. Blur that buffer using a Gaussian blur in a pixel shader.
  4. Render that buffer on top of the scene as a fullscreen quad.

If you don’t like pixel shaders, in step 2 you can render the glowy bits to a smaller buffer, and then later in step 4, when you render it, the magnification filter for the texture will give you some blurriness… but it won’t look as good as the Gaussian.

For faster model rendering, I’d look into using VBOs instead of plain vertex arrays, and using a vertex program to do the deformation/skinning.

The pseudo-code at the bottom of this page shows how to do skinning.

There’s also the open source character animation library Cal3D which ought to have a bunch of code you can read.

I would suggest, if at all possible, to do skinning on the GPU with a vertex shader. It’s not too hard, though it sucks up a lot of per-vertex constants. With even a GeForce 3, you can get up to 60 or so bones in a single shader (using quaternions and positions).

Now, if you do more than, say, two rendering passes over the mesh, you might want to reconsider doing this on the GPU, because the vertex shader has to redo the position computations on every pass. If you render the mesh many times, that adds up, whereas on the CPU you can do the skinning once.

I’m sorry, I might not have been clear about what I was asking.

In short, these are optimization questions.

The first one regarded skeletal skinning. As I understand it, I must first output the transformed vertices into a temporary buffer, which I then send for rendering.

What I wanted to know is whether this is the right way to do it, or if there is another way around it.

Furthermore, I would like to know if there is any performance gain in using VBOs for this highly dynamic data (meshes of about 1500 triangles), or whether regular vertex arrays do the trick better, since they have no per-frame creation overhead.

As for the second question, I need to make certain subsets of my mesh geometry “glow” in the dark, i.e. have different lighting properties.

I figured I could achieve this in two ways: either by rendering this geometry subset with lighting disabled, or by rendering it with changed material attributes.

Either of these two ways requires a material or state change in the middle of a draw call, which is not possible as far as I know. (Or is it? That’s why I posted this.)
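One common way around the state-change limit (sketched here with made-up names) is to split the index buffer once at load time, by a per-triangle “glow” flag, into two index lists. Then each frame is just two draw calls with a `glDisable(GL_LIGHTING)` (or material change) between them:

```cpp
#include <cstddef>
#include <vector>

// Split one triangle index list into two, by a per-triangle "glow" flag,
// so the mesh can be drawn as two glDrawElements() calls with a lighting
// or material state change in between. Done once at load time.
void SplitByGlow(const std::vector<unsigned>& indices,   // 3 per triangle
                 const std::vector<bool>&     triGlows,  // one flag per triangle
                 std::vector<unsigned>&       normal,
                 std::vector<unsigned>&       glow)
{
    for (std::size_t t = 0; t < triGlows.size(); ++t)
    {
        std::vector<unsigned>& dst = triGlows[t] ? glow : normal;
        dst.push_back(indices[3*t + 0]);
        dst.push_back(indices[3*t + 1]);
        dst.push_back(indices[3*t + 2]);
    }
}
```

Both lists still index the same skinned vertex buffer, so the CPU skinning pass is done only once per frame regardless of how many subsets you draw.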

Finally, I HAVE to implement this on the CPU; it’s in the assignment. I will start playing with shaders soon enough, but for now I’ll stick to the CPU.