I have successfully created glTF files that include “modal animation” of meshes. These use a single morph target for each node plus a triangular waveform of weights [0.0, 1.0, 0.0, -1.0, 0.0], and I animate the weights.
Now I need to add support for “transient animation”. For this case, the deformation of each mesh vertex is independent of every other vertex's deformation at every animation frame, so I can't use the scaled-weights approach - at least not directly.
It's straightforward enough to add multiple morph targets to each node - one for each animation time - but I can't figure out how to animate the morph targets directly.
If anyone can provide guidance, I’d be very grateful.
Also - is this the preferred approach to such transient animation? My use cases may include thousands of time points in an animation sequence - every vertex having a different deformation from every other vertex across every animation frame. Or is there a better solution to this problem?
Thanks in advance,
If each mesh has N morph targets, each morph target representing the state of the mesh during a single frame of animation, then it sounds like what you want is an animation that cycles through the morph targets? Something like this, with each row being a keyframe and each value being a morph target weight:
1 0 0 0 0 0 0 ... 0
0 1 0 0 0 0 0 ... 0
0 0 1 0 0 0 0 ... 0
...
The specification has a section on animations that modify the weights of a morph target set, toward the end of the Animation section. Blender can also export sequences of this type; it's a common result if you export vertex deformation to MDD, re-import the MDD into Blender (baking the morph targets), and then export to glTF.
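To make the keyframe layout above concrete, here is a minimal sketch (plain Python, names like `frame_times` are illustrative) that builds the weight rows the animation sampler's output accessor would hold:

```python
# Sketch: build weight keyframes for a mesh with one morph target per
# animation frame. Row i sets target i's weight to 1.0 and all others
# to 0.0, so each keyframe displays exactly one target.
def build_weight_keyframes(frame_times):
    n = len(frame_times)
    return [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]

times = [0.0, 0.1, 0.2]            # sampler input accessor (seconds)
weights = build_weight_keyframes(times)
print(weights)  # [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
```

The rows of `weights`, flattened in keyframe order, become the sampler's output accessor, with the animation channel's target path set to `"weights"`.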
Also - is this the preferred approach to such [per-vertex] animation?
I think it’s the best approach, of those that are supported in standard formats. Alternatives do exist that are more efficient — like Houdini VAT textures, or custom shaders — but those approaches haven’t really made their way into standards so far, and you’d probably need custom code in your viewing application to support them.
I came to the same realization and implemented this “identity matrix” approach for the keyframe output, and it works quite well.
I was able to display my test model with 1,000 animation frames in the Babylon Sandbox, but when I increased to 10,000 frames it died. The .glb was around 500 MB, so I was a little disappointed.
Are there any limits in the standard that may be causing issues?
I’d expect you can reduce the file size considerably by using sparse accessors for any accessors where < 33% of the values are non-zero. However… it’s still going to be unpacked in GPU memory, and that’s a lot of vertex data for most viewers. Quantizing data from float32 to int16 or int8 would help as well, but there are definitely upper limits to the approach of storing the state of each vertex at each frame of animation.
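A rough back-of-the-envelope sketch of why this hits memory limits (the vertex and frame counts below are just example numbers, not from your model):

```python
# Estimate the raw size of storing one morph target per animation frame.
# Assumes float32 position deltas only (3 components, 4 bytes each);
# normals or colors would add proportionally more.
def morph_target_bytes(n_vertices, n_frames, components=3, bytes_per_value=4):
    return n_vertices * n_frames * components * bytes_per_value

# e.g. 100k vertices x 10,000 frames of position deltas:
size = morph_target_bytes(100_000, 10_000)
print(size / 1e9, "GB")  # 12.0 GB - far beyond what most viewers can unpack
```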
It is worth noting that some engines implement morph target animations using additional vertex attributes, and others use textures… I wonder if there’s a major difference between the two in cases like this, with high potential memory costs.
The only buffer I see with < 33% non-zero values is the animation output, and that's tiny compared to the geometry and deformations. Or am I missing something?
I like the quantization idea - the normals and colors will for sure tolerate this - but I have no idea how to represent that in the glTF. Doesn't the standard force these to be all float32?
It’s common that the buffers storing the morph targets could be sparse, because they’re relative deltas from the base positions. But if most vertices move on every frame that may not be the case.
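As a quick way to check whether sparse accessors would pay off for a given target, here's a small sketch (the 1/3 threshold follows the rule of thumb above; `deltas` is an illustrative flat list of per-component values):

```python
# Sketch: decide whether a morph target's position deltas would benefit
# from a glTF sparse accessor. Sparse storage is roughly worthwhile when
# fewer than a third of the entries are non-zero.
def sparse_worthwhile(deltas, threshold=1 / 3):
    nonzero = sum(1 for v in deltas if v != 0.0)
    return nonzero / len(deltas) < threshold

# A target where only one vertex (three components) moved:
deltas = [0.0] * 27 + [0.1, 0.2, 0.3]
print(sparse_worthwhile(deltas))  # True
```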
If you add KHR_mesh_quantization to the file's extensionsUsed (and extensionsRequired) list, you can use a wider range of vertex attribute types for quantized data. That extension is widely supported and should work well in online viewers.
For a quick test here —
npm install --global @gltf-transform/cli
gltf-transform quantize input.glb output.glb
This pairs well with Meshopt compression (EXT_meshopt_compression) to bring size down further.
gltf-transform meshopt input.glb output.glb
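If you're curious what the quantization step does under the hood, here's a simplified one-axis sketch (real tools like gltf-transform quantize each axis against the mesh's bounding box and fold the inverse transform into the node; this is illustrative, not their actual implementation):

```python
# Sketch: quantize float32 positions to normalized int16, as permitted by
# KHR_mesh_quantization. Values are remapped into [-1, 1] via the bounding
# range, then scaled to the int16 range; the inverse offset/scale must be
# applied elsewhere (e.g. on the node transform) to restore world units.
def quantize_positions(positions):
    lo, hi = min(positions), max(positions)
    center, half = (lo + hi) / 2, (hi - lo) / 2 or 1.0
    q = [round((p - center) / half * 32767) for p in positions]
    return q, center, half  # dequantize: p = q / 32767 * half + center

q, center, half = quantize_positions([-2.0, 0.0, 2.0])
print(q)  # [-32767, 0, 32767]
```

Each value now takes 2 bytes instead of 4, halving the buffer size before Meshopt compression even runs.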
My current use case is transient dynamic finite elements - more or less every vertex deforms by a different amount within a frame AND across frames, so the deltas are almost never sparse.
Thanks for the KHR_mesh_quantization link - sounds promising