Draco compressing and uncompressing with Blender changes file size

Hello there,

so I’ve been experimenting with glTF and Draco compression, and the Draco-compressed file size is great. I noticed something strange, though. For an example uncompressed *.glb file I have:

uncompressed.glb : 2580500 Bytes
compressed.glb : 495476 Bytes
reinflated.glb : 3444000 Bytes

So, I started with the uncompressed file and exported it from Blender using Draco at compression level 3.
Then I imported compressed.glb back into a fresh Blender file and exported that one again, with compression disabled. The resulting file is bigger than the initial one.
How is that possible?

Here’s a screenshot of the files. Curious how, after a Draco round trip, the file is bigger than before. Is this a common thing?

Important to note that a round trip through Blender does convert the model into, and back out of, Blender’s internal representation. So, for example, by the time you’re exporting a file Blender has no way of knowing whether the original GLB had vertex normals, and it will include them on the exported model by default unless you opt out. There could certainly be other changes, e.g. I would want to compare vertex counts before and after, too… one way to check what’s changed would be:

npm install --global @gltf-transform/cli

gltf-transform inspect a.glb
gltf-transform inspect b.glb

If you’re using Blender just to get the Draco compression, it may be easier to do the compression in a tool that does that and nothing else, like the glTF-Pipeline or glTF-Transform CLI tools.
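
For reference, the compression step in either tool would look something like this (reusing the filenames from the start of the thread; default compression options assumed):

npm install --global gltf-pipeline

gltf-pipeline -i uncompressed.glb -o compressed.glb -d

or, with glTF-Transform:

gltf-transform draco uncompressed.glb compressed.glb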


Thanks for the quick reply!

I checked using gltf-transform inspect and for example object #2 looks like this:
original: 14,602 vertices, compressed: 23,421 vertices, reinflated: 23,508 vertices.
The attributes are always the same, always NORMAL:float32, POSITION:float32, TEXCOORD_0:float32.
Curious that compressing it increases the vertex count. (The glPrimitives count stayed the same.)

I’ve tried gltf-transform draco to compress the file, and if I open that CLI-compressed file in Blender and export it again, the resulting file is the same size as the original.

Is there a way to decompress Draco using the gltf-transform CLI?

(So it seems like the Draco encoder algorithm Blender uses might be increasing the vertex count?)

Does the vertex count also increase when exporting from Blender with compression disabled? The most common cause I see for these increases is welded/unwelded vertices, but there could be other causes… you can also file an issue on glTF-Blender-IO to ask. One quick test for the welding theory is shown below.
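
If unwelded (duplicated) vertices are the cause, re-welding with the CLI should bring the count back down (filenames here are just placeholders):

gltf-transform weld exported.glb welded.glb
gltf-transform inspect welded.glb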

Is there a way to decompress Draco using the gltf-transform CLI?

Yep! Try gltf-transform cp in.glb out.glb; it will show a warning that this decompresses the file. I don’t think glTF-Transform will change the vertex count unless you explicitly run the weld or unweld commands.
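
Reusing the filenames from the start of the thread, that decompression step would be:

gltf-transform cp compressed.glb reinflated.glb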

So,
(1) If I import and export the original file in Blender, it also comes out at 3.4 MB, so it was Blender’s structure that increased the file. Curiously, when I imported the gltf-transform draco-compressed file into Blender and re-exported it, it only had the original 2.6 MB.
(2) If I use gltf-transform cp to decompress the file, it is also bigger (both steps with gltf-transform, no Blender involved).

Hm, I don’t suppose you can post the original GLB? The only other idea I’ve got would be the index list. If the geometry wasn’t indexed before, compression will generally index it, and decompression would leave that in place, unless you add gltf-transform unweld in.glb out.glb or something. Otherwise I think I’m out of guesses. 🙂
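
If it helps, one way to check whether the primitives are indexed is the same Node API used in the script further down. A minimal sketch (the filename is a placeholder, and this assumes the file is not Draco-compressed, since a bare NodeIO can’t decode Draco):

import { NodeIO } from '@gltf-transform/core';

const io = new NodeIO();
const document = io.read('./model.glb');

// Count indexed vs. non-indexed primitives.
let indexed = 0;
let nonIndexed = 0;
for (const mesh of document.getRoot().listMeshes()) {
	for (const prim of mesh.listPrimitives()) {
		if (prim.getIndices()) indexed++;
		else nonIndexed++;
	}
}

console.log(`indexed: ${indexed}, non-indexed: ${nonIndexed}`);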

(discussed model offline)

My guess above was close, but not quite right. The model originally used uint16 index storage, and after Draco compression + decompression the index storage was uint32. This increases the file size by about 20%, even though the vertex count was the same either way. That seems like a different cause than Blender’s, where the vertex count actually increased. I’ve filed a bug on glTF-Transform at https://github.com/donmccurdy/glTF-Transform/issues/283.
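
To put rough numbers on that (hypothetical counts, just to illustrate): a mesh with 1,000,000 triangle indices needs 1,000,000 × 2 bytes ≈ 2 MB of index storage as uint16, but 1,000,000 × 4 bytes ≈ 4 MB as uint32. The index data doubles, and the overall file grows by whatever share the index list represents.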

In the meantime, if you need to “undo” that, a short script can bring the decompressed file back down to the original size:

import { NodeIO } from '@gltf-transform/core';

const io = new NodeIO();
const document = io.read('./input.glb');

// Downcast each primitive's index buffer from uint32 back to uint16.
// Note: this assumes no index value exceeds 65,535, the uint16 maximum.
for (const mesh of document.getRoot().listMeshes()) {
	for (const prim of mesh.listPrimitives()) {
		const indices = prim.getIndices();
		if (indices) {
			indices.setArray(new Uint16Array(indices.getArray()));
		}
	}
}

io.write('./output.glb', document);
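
If you save that as, say, fix-indices.mjs (the filename is just an example) with the dependency installed, it runs like so:

npm install @gltf-transform/core
node fix-indices.mjs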

Wow, thanks for looking into this! Yeah, it sounds like uint32 shouldn’t be necessary in general.

Thanks for building the gltf-transform tool; I didn’t know about it, but it looks super useful.

Most of the model transformations we do happen in AWS, and it looks like gltf-transform fits well for that.

