Skeletal Animation - Arbitrary number of bones and influences

Hello all

I am working on implementing a skeletal animation system, and I would like to support an arbitrary number of bones, as well as an arbitrary number of bones that can influence any particular vertex. All of the samples I have seen either limit themselves to a small number of bones, or create a large matrix array that can be loaded with an arbitrary number of matrices (which seems like a large waste of resources to me).

The number of weights and indices per vertex is also usually limited to 4, passed in as a vec4.

Any suggestions on going about this? Passing everything in as textures perhaps?

I would like to support an arbitrary number of bones, as well as an arbitrary number of bones that can influence any particular vertex

… why?

In any case, you can do more or less whatever you want, now that you can use image load/store/SSBOs to read whatever you like based on gl_VertexID. How you intend to pass arrays of differing lengths is up to you. You could have an array of offsets and lengths, indexed by vertex index. You would also have a giant array of weights and bone indices. The offset would be the offset from the beginning of that array to the first array index for that vertex’s bone weights/bone indices. The length would be the number of weights/indices used.
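In GLSL, that might look something like the following sketch (assuming GL 4.3+; every block name, binding point, and the uViewProj uniform is an illustrative choice, and it assumes each vertex’s weights sum to 1):

```glsl
#version 430

layout(location = 0) in vec3 inPosition;

// One entry per vertex: where its weight/index run starts in the flat
// arrays, and how many bones influence it.
struct SkinRange {
    uint offset;
    uint count;
};

layout(std430, binding = 0) readonly buffer SkinRanges {
    SkinRange ranges[];
};

layout(std430, binding = 1) readonly buffer SkinWeights {
    float weights[];
};

layout(std430, binding = 2) readonly buffer SkinIndices {
    uint boneIndices[];
};

layout(std430, binding = 3) readonly buffer Bones {
    mat4 bones[];
};

uniform mat4 uViewProj;

void main()
{
    SkinRange r = ranges[gl_VertexID];

    vec4 pos = vec4(0.0);
    for (uint i = 0u; i < r.count; ++i) {
        uint  bone   = boneIndices[r.offset + i];
        float weight = weights[r.offset + i];
        pos += weight * (bones[bone] * vec4(inPosition, 1.0));
    }

    gl_Position = uViewProj * pos;
}
```

The weight/index pairs for each vertex are stored contiguously, so the reads within any one vertex at least stay sequential.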

Of course, by doing this you’re sacrificing some performance, since you’re accessing that array in a very non-cache friendly way. But you clearly didn’t care about that when you decided to abandon the traditional methods. Also, you probably aren’t losing that much performance.

I just hope you don’t expect to see any major improvement in visual fidelity through this.

create a large matrix array that can be loaded with an arbitrary number of matrices (which seems like a large waste of resources to me)

Well… what else are you going to do: have the shader manufacture those bone matrices from nothing? The only way the shader can get access to the matrices for each bone that affects a vertex is to access them from some kind of array. There’s no getting around that one.

Whether you call that array a “texture,” a “UBO” or “SSBO”, it’s still just an array of matrices.
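Declaration-wise, the three flavors look like this (a declarations-only sketch, assuming #version 430; bindings, sizes, and names are arbitrary, and only the storage mechanism differs):

```glsl
// 1) Uniform block (UBO): size fixed at compile time.
layout(std140, binding = 0) uniform BonesUBO {
    mat4 bones[64];
} ubo;

// 2) Shader storage block (SSBO): the last member may be left unsized.
layout(std430, binding = 0) readonly buffer BonesSSBO {
    mat4 bones[];
} ssbo;

// 3) Buffer texture: one mat4 is four RGBA32F texels.
uniform samplerBuffer bonesTBO;

mat4 fetchBone(int i)
{
    return mat4(texelFetch(bonesTBO, 4 * i + 0),
                texelFetch(bonesTBO, 4 * i + 1),
                texelFetch(bonesTBO, 4 * i + 2),
                texelFetch(bonesTBO, 4 * i + 3));
}
```

Either way, `ubo.bones[i]`, `ssbo.bones[i]`, and `fetchBone(i)` all hand you the same thing: one matrix out of an array.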

Shader Storage Blocks can contain data of arbitrary size (for the last member)

[QUOTE=Alfonse Reinheart;1281899]Of course, by doing this you’re sacrificing some performance, since you’re accessing that array in a very non-cache friendly way. But you clearly didn’t care about that when you decided to abandon the traditional methods. Also, you probably aren’t losing that much performance.
[/QUOTE]

I certainly don’t think the snark is necessary, seeing as Google searches are not telling me WHAT these traditional methods actually are (I find it hard to believe that all implementations out there limit themselves to 4 weights per vertex). I was simply looking for some guidance on how this could be accomplished.

[QUOTE=Alfonse Reinheart;1281899]Well… what else are you going to do: have the shader manufacture those bone matrices from nothing? The only way the shader can get access to the matrices for each bone that affects a vertex is to access them from some kind of array. There’s no getting around that one.

Whether you call that array a “texture,” a “UBO” or “SSBO”, it’s still just an array of matrices.[/QUOTE]

Naturally the data has to come from somewhere, but allocating 50 or so 4x4 matrices (or 50 or so pairs of vec4s, if using two quaternions per bone) for a model that has 5 bones seems like a huge waste of resources if just those 5 bones could be passed to the shader.

You did find the traditional methods. You just don’t want to believe that these are the traditional methods.

The fact of the matter is that more than 4 weights per vertex is just not necessary most of the time. The majority of vertices aren’t weighted to more than 4 bones at all. And in the cases where an artist might want more, the contribution from a fifth or sixth bone will be so small that you wouldn’t notice if it weren’t there.

4 weights per vertex is “good enough” for most users who are interested in high performance rendering. The results you get are generally not worth the effort of allowing more.
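For reference, the traditional path looks something like this (a sketch; the attribute locations, MAX_BONES, and uniform names are arbitrary choices):

```glsl
#version 330 core

layout(location = 0) in vec3  inPosition;
layout(location = 1) in vec4  inWeights;     // four blend weights, summing to 1
layout(location = 2) in ivec4 inBoneIndices; // uploaded via glVertexAttribIPointer
                                             // so the indices stay integers

const int MAX_BONES = 64;        // pick a limit that fits your rigs
uniform mat4 uBones[MAX_BONES];  // bone-space -> model-space matrices
uniform mat4 uViewProj;

void main()
{
    // Blend the four bone matrices by their weights, then skin the vertex.
    mat4 skin = inWeights.x * uBones[inBoneIndices.x]
              + inWeights.y * uBones[inBoneIndices.y]
              + inWeights.z * uBones[inBoneIndices.z]
              + inWeights.w * uBones[inBoneIndices.w];
    gl_Position = uViewProj * (skin * vec4(inPosition, 1.0));
}
```

Vertices influenced by fewer than 4 bones just carry zero weights in the unused slots.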

Which would require having a different shader for every single count of bones. And the only difference between those shaders is bone count.

Changing shaders is far more expensive performance-wise than changing bound buffers.

But, as Spoops pointed out, you can pass an array whose length is defined by the range of memory the application binds as an SSBO. The shader is able to query the length of such unsized arrays. Of course, you won’t see much code like this online, since SSBOs are (relatively) new.
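To illustrate (a sketch assuming GLSL 4.30; the names and the single-bone vertex format are made up for brevity):

```glsl
#version 430

layout(location = 0) in vec3 inPosition;
layout(location = 1) in uint inBoneIndex; // one bone per vertex, for brevity

layout(std430, binding = 0) readonly buffer Bones {
    mat4 bones[]; // unsized: length comes from the buffer range the app binds
};

uniform mat4 uViewProj;

void main()
{
    // .length() reflects whatever the application bound, so the same shader
    // serves a 5-bone rig and a 200-bone rig without recompiling.
    uint bone = min(inBoneIndex, uint(bones.length() - 1));
    gl_Position = uViewProj * (bones[bone] * vec4(inPosition, 1.0));
}
```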

[QUOTE=Alfonse Reinheart;1281902]Which would require having a different shader for every single count of bones. And the only difference between those shaders is bone count.

Changing shaders is far more expensive performance-wise than changing bound buffers.

But, as Spoops pointed out, you can pass an array whose length is defined by the range of memory the application binds as an SSBO. The shader is able to query the length of such unsized arrays. Of course, you won’t see much code like this online, since SSBOs are (relatively) new.[/QUOTE]

Which is why I am looking for a method to pass an arbitrary number of bones to the shader, so that the same shader can be used for any number of bones while only that many bones need to be allocated in memory.

I will certainly look into SSBOs, but if a static array of bone data is the way it is typically done I will go that route as well.

I answered SSBOs because it was what you asked for, but as Alfonse said, I don’t think you really want them. SSBOs are meant to store huge loads of data without being limited to the format of a texture, but that isn’t the issue here. It’s better to just define a max number of bones in your shader and send only the data you need, as SSBOs aren’t really uniforms and are thus slower to access (don’t quote me on this, but in most implementations that’s the case).

It works the same as in most programming languages, where dynamic allocation just ‘saves space’ while creating memory fragmentation, pointer chasing, and lots of cache misses (when used wrong). A typical fixed-size buffer, even one that is ‘too large’, will save lots of CPU cycles.
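To make that concrete, here is roughly the fixed-maximum pattern (MAX_BONES, uBones, and the upload call in the comment are illustrative names; the skinning math itself is the same 4-weight blend shown earlier in the thread):

```glsl
// Shader side: a fixed upper bound compiled into the shader.
const int MAX_BONES = 100;
uniform mat4 uBones[MAX_BONES];

// Application side, shown as a comment to keep this in one language:
// upload only the matrices actually used, e.g. 5 for a 5-bone rig:
//
//     glUniformMatrix4fv(glGetUniformLocation(prog, "uBones"),
//                        5, GL_FALSE, (const GLfloat *)boneMatrices);
//
// The other 95 slots exist but are never written or read, so the only
// "waste" is unused uniform storage, not per-frame work.
```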
