I’m a software engineer at a volumetric capture company, and I’m new to the glTF community.
We use a structured-light camera array to capture dynamic subjects such as the human body and display the resulting volumetric assets on mobile devices via AR. However, the final mesh models are too large to transmit and render on mobile devices.
For example, if I capture at 30 fps and record for 10 seconds, I get an asset with 300 frames. Each frame contains a mesh with roughly 100k to 200k vertices on average. We have tried various techniques to reduce file size, such as decimation, Draco compression, H.264 compression, and key-frame-based compression, and brought a 10-second clip down from a few gigabytes to ~600 MB. However, we are still looking for better compression algorithms and transmission methods.
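To show where the "few gigabytes" figure comes from, here is a rough back-of-envelope estimate of the uncompressed geometry size. The vertex layout (position + normal + UV) and the triangles-per-vertex ratio are my assumptions, not exact numbers from our pipeline:

```python
# Back-of-envelope size estimate for an uncompressed 10-second capture.
# Assumed vertex layout and mesh density -- illustrative only.
FPS = 30
SECONDS = 10
FRAMES = FPS * SECONDS                  # 300 frames
VERTS_PER_FRAME = 150_000               # midpoint of the 100k-200k range

# Assumed attributes: position (3 floats) + normal (3 floats) + UV (2 floats)
BYTES_PER_VERTEX = (3 + 3 + 2) * 4      # 32 bytes

# Assume roughly 2 triangles per vertex, 3 uint32 indices per triangle
TRIS_PER_FRAME = 2 * VERTS_PER_FRAME
BYTES_PER_INDEX = 4

vertex_bytes = FRAMES * VERTS_PER_FRAME * BYTES_PER_VERTEX
index_bytes = FRAMES * TRIS_PER_FRAME * 3 * BYTES_PER_INDEX
total_gb = (vertex_bytes + index_bytes) / 1e9

print(f"vertices: {vertex_bytes / 1e9:.2f} GB")   # -> 1.44 GB
print(f"indices:  {index_bytes / 1e9:.2f} GB")    # -> 1.08 GB
print(f"total:    {total_gb:.2f} GB")             # -> 2.52 GB
```

Textures would add to this, so geometry alone already lands us in the multi-gigabyte range before compression.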
We found glTF to be a great 3D format for both transmission and compression; in particular, it is GPU-memory-efficient on mobile devices. From my initial research, glTF seems to serve our purpose well, except that it doesn’t appear to be designed for volumetric data that packs multiple frames into one asset, where each frame has an associated timestamp for playback. Does glTF support storing such volumetric data, and are there any workarounds?
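To make the question concrete, here is a rough sketch of the kind of layout I have in mind: one mesh per captured frame, with per-frame timestamps stashed in `extras`. As far as I can tell this is not standard glTF (core glTF animation targets node transforms and morph weights, not mesh swapping), which is exactly what I’m asking about:

```json
{
  "asset": { "version": "2.0" },
  "scenes": [{ "nodes": [0, 1] }],
  "nodes": [
    { "mesh": 0, "extras": { "timestamp": 0.0 } },
    { "mesh": 1, "extras": { "timestamp": 0.0333 } }
  ],
  "meshes": [
    { "primitives": [{ "attributes": { "POSITION": 0 }, "indices": 1 }] },
    { "primitives": [{ "attributes": { "POSITION": 2 }, "indices": 3 }] }
  ]
}
```

A viewer would then need some convention (an extension?) to show only the node whose timestamp matches the playback clock. Is there an existing extension or accepted workaround for this kind of frame-sequence playback?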