Is glTF suitable for streaming deforming meshes? Does it do time-based compression similar to video codecs?


I’m looking for a way to stream deforming 3D geometry, possibly with changing topology (varying vertex count), using as little bandwidth as possible.
Is glTF suitable for this? Does it offer any temporal compression, the way video codecs do, to minimize the data flow?

I’m not talking about rigged characters with bones+weights etc.


Unfortunately, I’m not aware of any standard format suitable for that, glTF included. If the topology does not change, you can achieve a similar effect with a sequence of morph targets (or “shape keys”), but the bandwidth cost of that method is fairly high.
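To put rough numbers on that cost, here is a back-of-the-envelope sketch. The vertex count, frame rate, and uncompressed float32 positions are illustrative assumptions, not anything glTF mandates:

```python
# Rough bandwidth estimate for streaming one morph target per frame.
# All numbers here are illustrative assumptions, not glTF requirements.
def morph_target_bandwidth(vertex_count, fps, bytes_per_component=4):
    # Each morph target stores a 3-component position delta per vertex,
    # here assumed to be uncompressed float32 (4 bytes per component).
    bytes_per_frame = vertex_count * 3 * bytes_per_component
    return bytes_per_frame * fps  # bytes per second

# e.g. a 100k-vertex mesh at 30 fps:
rate = morph_target_bandwidth(100_000, 30)
print(f"{rate / 1e6:.1f} MB/s")  # 36.0 MB/s
```

Quantized or Draco-compressed attributes would shrink this, but there is still no temporal prediction between frames, which is the part video codecs are good at.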

There are some proposals on the glTF GitHub repository for MPEG-based extensions along the lines of what you describe, though I don’t know much about them myself.

Alternatively, I’ve seen techniques like Houdini’s Vertex Animation Textures used in various engines, but those generally require custom support in your content pipeline and rendering engine.
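For context, the core idea behind vertex animation textures can be sketched in a few lines of Python with NumPy. The layout here (one row per frame, one column per vertex, XYZ in the channels) is just one common convention, not a fixed standard:

```python
import numpy as np

# Sketch of baking a vertex animation texture: each row is one frame,
# each column one vertex, and the three channels store the XYZ position.
def bake_vat(frames):
    """frames: list of (vertex_count, 3) float arrays, same topology each frame."""
    return np.stack(frames, axis=0)  # shape: (num_frames, vertex_count, 3)

def sample_vat(vat, frame_index):
    # At runtime, a vertex shader would perform this lookup per vertex,
    # indexing the row by animation time and the column by vertex ID.
    return vat[frame_index]
```

A real implementation would also normalize positions into the texture’s value range and store the min/max bounds so the shader can decode them, which is where most of the per-engine custom work comes in.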

Hi and thanks a lot for your reply!
I’m basically looking to stream a 3D scene captured by a depth camera, or possibly some kind of lidar later on.
To minimize the data, I felt the format should efficiently compress vertices that move very little between frames.

Good to know that this is not a feature of glTF, at least not yet. I’ll probably continue with my idea of just using a video codec such as HEVC, since in camera space the vertices don’t move in X or Y, only in depth. I just have to figure out how to get high enough bit depth out of a video codec.
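One common workaround for the bit-depth problem (a sketch of a general technique, not anything glTF- or HEVC-specific) is to split each 16-bit depth value across two 8-bit channels before encoding and recombine them after decoding. With a lossy codec the low byte degrades badly wherever the high byte crosses a boundary, so in practice a 10/12-bit HEVC profile or a more codec-friendly depth encoding usually works better, but the lossless round trip itself looks like this:

```python
import numpy as np

def pack_depth16(depth):
    """Split a uint16 depth image into two uint8 planes (high and low bytes)."""
    high = (depth >> 8).astype(np.uint8)
    low = (depth & 0xFF).astype(np.uint8)
    return high, low

def unpack_depth16(high, low):
    """Recombine the two uint8 planes into the original uint16 depth image."""
    return (high.astype(np.uint16) << 8) | low.astype(np.uint16)

# Round-trip check on a few representative depth values.
depth = np.array([[0, 1000, 65535]], dtype=np.uint16)
high, low = pack_depth16(depth)
assert np.array_equal(unpack_depth16(high, low), depth)
```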

Thank you.