Creating vertex and index arrays for a VBO


I use VBOs to create a mesh grid. Everything works fine, but I want to optimize my array creation. My code is written in Python.

I have this kind of mesh (the number at each corner is the vertex number):

1 ------- 2=4
|       / |
|     /   |
|   /     |
| /       |
3=5=7 --- 6=8
|       /
|     /
| /

The array looks like this:

(x1,y1,z1),(x2,y2,z2),(x3,y3,z3) --> triangle 1
(x4,y4,z4),(x5,y5,z5),(x6,y6,z6) --> triangle 2
(x7,y7,z7),(x8,y8,z8),(x9,y9,z9) --> triangle 3
and so on …

Now, I want to create two new arrays from this big one:

  • One vertex array without duplicate vertices (since vertex 2 = vertex 4, and so on)
  • An index array which will look like this: (1,2,3,2,3,4,3,4,5,…)

So what I do is parse the big vertex array with for loops and check at every turn whether the vertex is already in my new vertex array, then fill the index array and the new vertex array. This step is very slow because the initial array is big. Are there algorithms or tricks to speed things up?
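For reference, the slow approach described here boils down to something like this (a minimal sketch with a hypothetical function name; `triangles` stands for the flat list of (x, y, z) tuples from the big array):

```python
# Naive deduplication: for every vertex, linearly scan the unique
# vertices collected so far -- roughly O(N^2) overall, which is what
# makes large meshes slow.
def build_indexed_mesh_naive(triangles):
    vertices = []  # unique vertices, in order of first appearance
    indices = []   # one index per original vertex (3 per triangle)
    for v in triangles:
        if v in vertices:                    # linear search: O(N)
            indices.append(vertices.index(v))  # second linear search
        else:
            indices.append(len(vertices))
            vertices.append(v)
    return vertices, indices
```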

One more thing: the mesh is not full, there are some “holes” in it where triangles are missing. (Just to make it a little bit more complicated :d)

Most tutorials fill these arrays with 5-10 values, but never discuss how it would work in the “real” world with a lot of vertices.

I guess the distilled version of your post would be: “I want to draw a regular grid as an indexed triangle mesh.” Correct?

If so, the first “problem” is solved easily. Since every vertex in an indexed vertex array is unique, you simply need to fill your VERTEX array with the coordinates of every grid point.

The second step is to fill the INDEX array with enough correct indices to have GL set up your triangles.

To pick up your example above, using the upper two triangles, the vertex and index arrays would look like this:

VA = [(x1, y1, z1), (x2, y2, z2), (x3, y3, z3), (x4, y4, z4)]
IA = [0, 2, 1, 1, 2, 3]

You will notice that there may and most likely must exist duplicate indices but only unique vertices.
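For a full W×H grid this index array can be generated directly, with no duplicate checking at all. A minimal sketch (function and parameter names are mine, for illustration; it assumes the vertex array is laid out row by row, and follows the same winding as the IA example above):

```python
def grid_indices(cols, rows):
    """Index array for a regular grid of cols x rows vertices,
    two triangles per grid cell, consistent winding throughout."""
    indices = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c  # top-left vertex of this cell
            # triangle 1: top-left, bottom-left, top-right
            indices += [i, i + cols, i + 1]
            # triangle 2: top-right, bottom-left, bottom-right
            indices += [i + 1, i + cols, i + cols + 1]
    return indices
```

For a 2×2 grid this reproduces the IA above; for a mesh with holes you would simply skip the cells that have no data.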

Note that the index array sources vertices from the vertex array in counter clockwise order, thus ensuring proper front-facing polygons. Another index array doing the same job would be

IA2 = [1, 0, 2, 2, 3, 1]

For basic usage any vertex order with consistent winding will work; just make sure you create your index array consistently. For more elaborate projects, reordering the index array to make good use of the hardware post-transform cache is preferable, but I guess that’s way beyond your scope for now.

If your winding is not consistent, or if your index array is not correctly sourcing from the vertex array, then what you get is what you perceive as “holes”. Either there really isn’t a triangle there, or the winding is wrong, meaning that the missing triangles are actually back-facing while the rest are correctly front-facing.



My bad, I forgot to explain why there are some “holes”. It’s because my original vertex array is made of (x,y,z) coordinates calculated from experimental data. So at some places there is no data, which means a “hole” == no vertex == no triangle to display.

My winding should be okay, I have no display errors or problems.

The problem I encounter is that, because of the holes, I need to check every vertex through for loops to fill the two arrays.

Example :

Original vertex array: 5250 vertices = 1750 triangles

Cleaned-up version with unique vertices: 938 vertices
Cleaned-up index array: 5250 indices (as I still draw 1750 triangles)

This example array is fast to fill: 0.8 seconds with Python.
But I want to use bigger meshes, around 96081 vertices (16326 unique) … there you have to wait 20 seconds to fill the new arrays.

So this isn’t really an OpenGL problem but more about Python runtime performance?

Yes and no.

On one side I am looking to optimize it with Python/NumPy methods; on the other side it is a more general algorithmic problem, which I thought some people would have stumbled on before, as it is an OpenGL-related question.

Filling 96081 vertices into a VBO in a language that compiles to native code (like C/C++) is, in the worst case, a matter of milliseconds. Your performance issue is definitely the result of using an interpreted language.


Still, even with an interpreted language 20s to throw out duplicate vertices is extremely long - and for that matter, so is 800ms for merely 5k vertices.

Could you post the code that actually sets up the arrays? Where does the data come from - is it generated procedurally, or from an existing data set?

I believe your problem is mostly related to the complexity of the algorithm you’re using.

If for every vertex you do a linear lookup in an array, you’re doing about N*N operations in total (N being the number of vertices). In Python you can easily reduce this to about N by using a ‘dict’ to store the vertices you have already seen.
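A minimal sketch of that dict-based approach (the function name is mine; it assumes the coordinates are hashable tuples that match exactly, which holds here since duplicates come from identical computed values):

```python
def build_indexed_mesh(triangles):
    """Deduplicate vertices in roughly O(N) using a dict that maps
    each vertex tuple to the index it was assigned on first sight."""
    seen = {}      # vertex tuple -> index into `vertices`
    vertices = []  # unique vertices, in order of first appearance
    indices = []   # one index per original vertex (3 per triangle)
    for v in triangles:
        idx = seen.get(v)          # average O(1) hash lookup
        if idx is None:
            idx = len(vertices)
            seen[v] = idx
            vertices.append(v)
        indices.append(idx)
    return vertices, indices
```

The only change from the naive loop is replacing the linear `in`/`index` searches with a hash lookup, which is what turns N*N into N.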


The result is certainly not as fast as C or C++ code, but it may be fast enough for your needs.

I know Python is not the fastest, but I need to work with it because of some compatibility issues.

The data is experimental; some (long) calculations are done on it. This pre-calculation happens before the OpenGL display, so I don’t need to optimize it further. Next, I take the array of vertices obtained from the calculations and display it. I can’t post the code here because it’s too long.

This is exactly what I need. I did not know about ‘dict’. It’s very fast. I just implemented your technique in my code and the vertex stripping is way faster: I get 0.14 seconds for 123057 vertices, where before I had 21.69 seconds … !!!

Yes, now my app runs smoothly. 0.14 seconds is okay for me. Thank you everyone for your help.