Optimization of file size

hi guys,

i’m new to gltf.

i have a relatively large scene with potentially complex geometry. my single buffer contains only indices plus a position + normal for each vertex. i have no firm direction on how to make the file size smaller.

my thoughts so far:

  1. my indices were stored as 4-byte ints, which is perhaps unnecessary? i found people tend to use 2-byte unsigned shorts + situational alignment padding (another 2 bytes).

i started with unsigned shorts (65535 max). each of my triangles is treated individually (i mean no strips or fans, though “duplicates” happen), and each short is a unique index (just incrementing from 0 upward).

my scene is quite large, so i figured why not just use 4-byte ints for their much larger range. but i believe the downside is that space is wasted on every index below the ushort max.

am i supposed to stick with ushort for storing indices and use multiple buffers instead?
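the tradeoff above can be sketched roughly like this (a minimal illustration; `index_bytes` is a hypothetical helper and the numbers are made up, not from any glTF tool):

```python
# Rough size comparison for index storage: 2-byte unsigned shorts
# (glTF componentType 5123) vs 4-byte unsigned ints (5125).

def index_bytes(index_count: int, vertex_count: int) -> int:
    """Pick the smallest component type that can hold every index."""
    if vertex_count <= 65536:          # indices 0..65535 fit in a ushort
        size = index_count * 2
        # glTF requires accessor byte offsets aligned to the component
        # size, so an odd-count ushort view may need 2 bytes of padding
        return size + (size % 4)
    return index_count * 4             # fall back to uint

# 100k triangles over 60k vertices: ushort is enough
print(index_bytes(300_000, 60_000))   # 600000 bytes
# same triangles over 70k vertices: must use uint
print(index_bytes(300_000, 70_000))   # 1200000 bytes
```

so the ushort representation halves the index storage whenever a mesh stays under the 65536-vertex limit, which is one motivation for splitting large scenes into multiple primitives.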

  2. about “duplicates”: how do i define a “duplicate”?

if i look at position only, there are definitely duplicates. but the normals differ on the duplicated vertices. does that just make them unique? i googled around and found out that a vertex cannot have multiple normals.

i took a guess (potentially wrong) that the rendering pipeline just takes the vector sum of all the normals at the same location. what do i do then? find a way to combine normals for each position before export? shrink the buffer from 1 position & multiple normals down to 1 position & 1 normal?

should i look into triangle strips or fans?

could anyone point me down the right path? thanks in advance.

Hi @glrules!

If you’re not able to share the scene itself, could you run the model through https://gltf.report and share the stats from the inspect tab? Or this would also work:

npm install --global @gltf-transform/cli
gltf-transform inspect model.glb

That should get you a better idea of how much size your vertex attributes, indices, and other resources are actually using. Without knowing that it’s hard to guess what optimizations you need.

but normals are different on the duplicated vertices. does it just make them unique? i googled around and found out that a vertex cannot have multiple normals.

Right, if vertices at the same position need multiple normals (e.g. to display a hard edge) then you can’t merge them. That’s fine, just try to merge the remaining vertices. If all edges are hard edges, you could omit vertex normals entirely, and the viewer will calculate flat normals itself.

2 Likes

hi @donmccurdy!

thanks for introducing me to the tools. they are amazing!

i wonder if a moderator could grant me more posting options, like file upload? or how do i progress my account?

i prepared quite a long reply with supporting files, only to be prompted “new user cannot do this” after clicking Reply. :joy:

Try it again. I think you should be good to upload images and attachments now.

This happens through normal use of your user account (reading mostly, posting some, spending time in the forums).

Sorry about that! Spam prevention for new users. I hope you saved the content in a text file so you can just copy/paste + attach and be back in action.

@Dark_Photon much better now, thanks!

test_run.glb (3.9 MB)
test_run.gltf (20.5 KB)

i did save it, but i made some progress over the weekend, so the prepared content is obsolete anyway.

i changed the way i store indices: from 4-byte uints only, to 2-byte ushorts mostly, plus 4-byte uints for complex meshes.

now i have two buffers, each with two views: an index view and an attribute view.

  • buffer 0: (ushort indices) + (alignment) + (pos + normal, float vec3 interleaving)

  • buffer 1: (uint indices) + (pos + normal, float vec3 interleaving)

file size is a little smaller.

i’m still reading the tutorial section of gltf github repo and trying to make things simple.

so, about scene structure:

  • single scene

  • single root node, no mesh attached, plenty of child nodes on the root node

  • each child node has a mesh attached. each mesh has a single mesh primitive.

  • single pbr material for all primitives.

  • no texture or animation

even though my thought on storage types worked out, i still have no clue how buffers are usually used. things like: how many buffers should i use? what should their layout look like? my first thought back then was “why not just use three buffers for idx, pos, normals? keep them separated, no concerns about offset or stride.”

is there a rule of thumb among gltf users on how buffer(s) should be used?

putting the storage approach aside, i’m also digesting what Don @donmccurdy suggested. bear with me, i’m still familiarizing myself with the concepts here. :joy:

so basically, the normal becomes the unique identity of vertices at the same location.

i suppose you mean this:
(image attachment)

i realized my vertex normals are actually just face normals: norm_a = norm_b = norm_c = norm_triangle_face. so Va’s duplicates just come from the two purple triangles, and pointing upwards, there are two identical vertices at a produced by the blue triangles. i should drop one from each pair, correct?

but how? i think when i populate indices and attributes at the same time, i can go “hey, i’ve met this exact same vertex before, it was no. 9. i won’t increment my index count here; i’ll put 9 instead and skip the attribute data.”

in the end, i will be looking at an index buffer view “longer” than its corresponding attribute buffer view. is this a problem?

besides, does identifying the “same” vertex require me to revisit all the vertices i’ve collected so far? or perhaps i should “filter” the collected ones before comparing positions and normals? either way, i feel like the dropping process is going to be slow. what would be an optimal duplicate-removal strategy?

i guess they are. i googled what a hard edge means in the gltf context. if i understand the term correctly, my assets contain only hard edges, based on the fact that i use face normals as vertex normals. so, could i forget about the normals entirely? only export indices and positions? would everything just be black if i provide no normal data at all?

what is “the viewer” exactly? the renderer? the program that reads the gltf? do “flat normals” = face normals?

i went a little further and found the concept of averaging the normals at a particular location. i suppose that produces a “soft edge” and could condense my buffer even further, but i think that’s a bit too far for me at the moment.
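for anyone curious, the averaging idea can be sketched like this (a minimal pure-Python illustration of the concept, not how any particular exporter implements it):

```python
# "Smoothing": sum, then renormalize, all face normals meeting at a
# shared position, producing one normal per position (a soft edge).
import math
from collections import defaultdict

def smooth_normals(vertices):
    """vertices: list of (position, face_normal) tuples.
    Returns a dict mapping position -> averaged unit normal."""
    sums = defaultdict(lambda: [0.0, 0.0, 0.0])
    for pos, n in vertices:
        for i in range(3):
            sums[pos][i] += n[i]
    out = {}
    for pos, s in sums.items():
        length = math.sqrt(sum(c * c for c in s)) or 1.0  # avoid /0
        out[pos] = tuple(c / length for c in s)
    return out

# two faces meet at the origin with perpendicular normals
verts = [((0, 0, 0), (1.0, 0.0, 0.0)),
         ((0, 0, 0), (0.0, 1.0, 0.0))]
print(smooth_normals(verts)[(0, 0, 0)])
# -> roughly (0.707, 0.707, 0.0)
```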

the tools are amazing. after seeing the stats, i immediately realized some of my meshes are just copies, differing only in translation. i need to look into it more and see if i can use node transforms to reduce the copies.

i must say you guys do awesome work. the tutorials and toolchains are excellent. i appreciate your help!

1 Like

… i have no clue how buffers are used usually. things like, how many buffers should i use? how does their layout look like? … is there a rule of thumb among gltf users on how buffer(s) should be used?

Most glTF files use only a single “buffer” (in the glTF sense); it’s just a container for network transfer of binary data, without any particular meaning for GPU layout. The choice of “buffer view” layouts is more subjective. Indices must be in a separate buffer view from vertex attributes; beyond that it’s up to you. Putting each vertex attribute into a separate buffer view is fine. Interleaving vertex attributes into a shared buffer view can be more efficient for rendering (see: vertex cache locality), but it’s a subtle optimization and different 3D engines may prefer different layouts. Personally I wouldn’t spend a lot of time worrying about it if you don’t have to; tools like gltfpack or gltf-transform can optimize your buffer views automatically.
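As a sketch of what “interleaving” means here, this is how POSITION and NORMAL could share one buffer view with a 24-byte stride (hypothetical data, standard library only):

```python
# Interleaved vertex layout: [pos.xyz | nrm.xyz | pos.xyz | nrm.xyz ...]
# Both accessors would point at the same buffer view with
# byteStride = 24; NORMAL uses byteOffset 12 within each stride.
import struct

vertices = [
    # (position, normal)
    ((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
    ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0)),
]

buf = bytearray()
for pos, nrm in vertices:
    buf += struct.pack("<3f", *pos)   # POSITION at offset 0 in stride
    buf += struct.pack("<3f", *nrm)   # NORMAL at offset 12 in stride

byte_stride = 24  # 6 floats * 4 bytes, shared by both accessors
print(len(buf))   # 48 == vertex_count * byte_stride
```

The non-interleaved alternative simply packs all positions, then all normals, into separate buffer views; the total byte count is the same either way.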

what would be an optimal duplicates removal strategy?

There are tools that can do this for you, if you prefer. gltfpack will apply this and many other optimizations. Or glTF-Transform can do just duplicate removal with:

gltf-transform weld input.glb output.glb

If you’d like to learn more about how these optimizations are implemented, c++ - Find unique vertices from a 'triangle-soup' - Stack Overflow should give some ideas. Using a hashmap, or pre-sorting vertices, would be common. The index buffer does not need to reference all vertices in the vertex buffer, e.g. you could have a large vertex buffer with many smaller index buffers pointing into it.
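A minimal sketch of the hashmap approach (illustrative only, not the actual code inside gltfpack or glTF-Transform):

```python
# Weld a triangle soup: one pass, reusing an index whenever the exact
# (position, normal) tuple has been seen before. Hard edges survive
# because their normals differ, making the tuples distinct.

def weld(soup):
    """soup: flat list of (position, normal) tuples, 3 per triangle.
    Returns (unique_vertices, indices)."""
    seen = {}                 # vertex tuple -> index from first sighting
    unique, indices = [], []
    for v in soup:
        idx = seen.get(v)
        if idx is None:
            idx = seen[v] = len(unique)
            unique.append(v)
        indices.append(idx)
    return unique, indices

soup = [((0, 0, 0), (0, 0, 1)), ((1, 0, 0), (0, 0, 1)),
        ((0, 0, 0), (0, 0, 1)),            # exact duplicate -> index 0
        ((0, 0, 0), (0, 1, 0))]            # hard edge -> new vertex
unique, indices = weld(soup)
print(len(unique), indices)   # 3 [0, 1, 0, 2]
```

Each vertex is looked up in the map exactly once, so this runs in linear expected time; re-scanning every previously collected vertex for each new one would be quadratic.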

my assets contain only hard edges, just based on the fact that i use face normal as vertex normal. so, i could forget about the normals entirely? only export indices and postions? would it be just all black if provide no normal data at all?

what is “the viewer” exactly? the renderer? the program reads gltf? “flat normals” = face normals?

The glTF specification states that if a mesh does not contain vertex normals, the application opening that file is expected to shade the surface with flat normals / face normals.

So yes, it sounds like you could probably just omit the vertex normals! One exception here would be if you are using a “normal map” on the material, then I think you do need vertex normals.
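For intuition, a flat (face) normal can be derived from a triangle's winding order alone, which is roughly what a viewer does when normals are omitted. A minimal sketch, assuming counter-clockwise winding:

```python
# Flat normal of triangle (a, b, c): normalize(cross(b - a, c - a)).
import math

def flat_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],     # cross product u x v
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(c * c for c in n)) or 1.0  # avoid /0
    return tuple(c / length for c in n)

# a counter-clockwise triangle in the XY plane faces +Z
print(flat_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))   # (0.0, 0.0, 1.0)
```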

1 Like

Concrete example — if you have a cube with all hard edges, then you have three choices:

  1. keep hard vertex normals, requiring 6 * 4 = 24 vertices
  2. switch to smooth (averaged) normals, requiring 8 vertices
  3. omit vertex normals, requiring 8 vertices

Aside — other vertex attributes, like UVs (texture coordinates) or vertex colors, might also prevent you from merging vertices.
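A quick back-of-the-envelope on the cube options above, counting only the float vec3 attribute data (12 bytes each; indices excluded for simplicity):

```python
# Attribute bytes for the three cube options: position always present,
# normal optional, both float vec3 (12 bytes per vertex each).

VEC3 = 12  # 3 floats * 4 bytes

def attribute_bytes(vertex_count, with_normals):
    per_vertex = VEC3 * (2 if with_normals else 1)
    return vertex_count * per_vertex

print(attribute_bytes(24, True))    # option 1, hard normals:   576
print(attribute_bytes(8, True))     # option 2, smooth normals: 192
print(attribute_bytes(8, False))    # option 3, no normals:      96
```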

1 Like

That’s true for WebGL, but that’s not true in general.

1 Like

Thanks, didn’t realize! They must be separate for a glTF file to be valid, however.