Identifying adjacent triangles??


I have an array of triangle vertices (many thousand) which draw the surface of a head, looping around the shape a line at a time.

I have calculated the normals for each triangle, but to implement smooth shading I now need to calculate the normals of each vertex. I understand that I can take the average normal of the surrounding triangles to do this, but how can I work out which are the surrounding triangles?

Does anyone know an easy (and efficient) way of working this out?



Take some triangle. Take its first vertex. Loop through all other triangles. Check whether they have the same vertex.

This algo is quite slow, but any other would need some geometrical assumption I think. And I didn’t understand what it means that the triangles loop around the shape a line at a time.

What I mean is the triangles are listed in an array (I’m reading the data in from a data file given to me), starting at the top of the head and looping around, and then the row below that until the neck is reached.

This is centered around the origin, and most triangles face outward, but there are a few exceptions like the triangles representing the ear canals and nostrils.

I was hoping for some mathematical trickery that would allow me to identify adjacent triangles without looping through every vertex point.

Your reply is appreciated,


Is there any guarantee that all two adjacent triangle pairs will form a quad?

Maybe it would help if you mailed me a pic of the triangulation.

Michael is right. Two triangles share a vertex if any of the following holds:
triangle1.vertex1 == triangle2.vertex1
triangle1.vertex1 == triangle2.vertex2
triangle1.vertex1 == triangle2.vertex3
triangle1.vertex2 == triangle2.vertex1
triangle1.vertex2 == triangle2.vertex2
triangle1.vertex2 == triangle2.vertex3
triangle1.vertex3 == triangle2.vertex1
triangle1.vertex3 == triangle2.vertex2
triangle1.vertex3 == triangle2.vertex3

Where, by saying
triangle1.vertex3 == triangle2.vertex3
I mean
(triangle1.vertex3.x == triangle2.vertex3.x &&
triangle1.vertex3.y == triangle2.vertex3.y &&
triangle1.vertex3.z == triangle2.vertex3.z)

Now, if your vertices are not 100% precisely defined, you may replace the == with an “approximately equal” function of yours.

And yes, it’s slow, but it does the right thing. Optimizing will depend on how you have the data arranged, as Michael also said.
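The nine comparisons above, with the tolerance check folded in, can be sketched like this (a minimal sketch; the Vertex/Triangle structs, the nearlyEqual name and the 1e-5f tolerance are my assumptions, not from the thread):

```cpp
#include <cmath>

// Illustrative structs; real code would use whatever the data file provides.
struct Vertex { float x, y, z; };
struct Triangle { Vertex v[3]; };

// "Approximately equal" comparison for imprecise vertex data.
bool nearlyEqual(const Vertex& a, const Vertex& b, float eps = 1e-5f)
{
    return std::fabs(a.x - b.x) < eps &&
           std::fabs(a.y - b.y) < eps &&
           std::fabs(a.z - b.z) < eps;
}

// True if the two triangles share at least one vertex
// (the 3x3 = 9 comparisons listed above).
bool shareVertex(const Triangle& t1, const Triangle& t2)
{
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (nearlyEqual(t1.v[i], t2.v[j]))
                return true;
    return false;
}
```

Running this for every pair of triangles is the O(n²) approach Michael described; it is slow but makes no geometric assumptions about the mesh.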

Depending on your application and how often you are performing these sorts of adjacency tests, it might be to your advantage to store the data in a data structure which reflects the topology of the data. Look around for info on Winged-edge, half-edge, and other similar data structures.

Initially cramming your data into a structure like this takes a little work, but once you do, operations like finding average normals and adjacent faces become trivial.
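Short of a full winged-edge or half-edge structure, even a simple edge map gives you face adjacency cheaply. This sketch assumes indexed triangles; the Tri struct and buildEdgeMap name are illustrative, not from the thread:

```cpp
#include <map>
#include <utility>
#include <vector>

// Illustrative indexed triangle: three indices into a vertex array.
struct Tri { int v[3]; };

// For each undirected edge (smaller index first) record the faces that
// use it; any two faces in the same bucket share an edge, i.e. are adjacent.
std::map<std::pair<int,int>, std::vector<int> >
buildEdgeMap(const std::vector<Tri>& tris)
{
    std::map<std::pair<int,int>, std::vector<int> > edges;
    for (int f = 0; f < (int)tris.size(); ++f) {
        for (int e = 0; e < 3; ++e) {
            int a = tris[f].v[e];
            int b = tris[f].v[(e + 1) % 3];
            if (a > b) std::swap(a, b);          // make the edge undirected
            edges[std::make_pair(a, b)].push_back(f);
        }
    }
    return edges;
}
```

Building the map is one pass over the triangles; after that, adjacency queries no longer need to loop over the whole mesh.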

Thanks for the advice, you’ve all given me some things to think about.


Are your triangles defined with indices to an array of vertices, or is each triangle a set of three explicit points?

In the first case, you can speed up the smoothing algorithm a bit…
This assumes that you will need only ONE normal per vertex. I say that because you sometimes want more than one normal per vertex (e.g. if there is a break in the geometry, you do not always want to smooth-shade).

Now, to the optimization:

  1. Calculate all the face normals (which you already do, as far as I understand !). OPTIONAL: You might as well calculate the area of each face if you want to take the triangle surface into account when averaging the normals.

  2. Create an array of normals of size No vertices i.e.:

C_Normal *Normals=new C_Normal[iNoVertices];

where C_Normal is a vector…

  3. Initialize all the normals to (0,0,0)

  4. Loop through all triangles and do:

Normals[pTriangle->V1] += pTriangle->Normal * pTriangle->Area;
Normals[pTriangle->V2] += pTriangle->Normal * pTriangle->Area;
Normals[pTriangle->V3] += pTriangle->Normal * pTriangle->Area;
Where pTriangle is a pointer to the current triangle and V1, V2, V3 are the indices of the vertices that make the triangle.
And Normal is the triangle normal you calculated in 1), Area is the triangle area you calculated in 1).

  5. Loop through the normals array and normalize each normal.

You now have your normals for each vertex…

This is much faster than looping through each vertex for each vertex (i.e. sqr(No Vertices)) but, as I said, it assumes one normal per vertex only…
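The five steps above can be put together as a runnable sketch, assuming indexed triangles and one normal per vertex (the Vec3 struct and the vertexNormals name are my assumptions, not Eric's actual code):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)
{ Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
static Vec3 cross(Vec3 a, Vec3 b)
{ Vec3 r = { a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x }; return r; }

struct IndexedTri { int v1, v2, v3; };

std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<IndexedTri>& tris)
{
    // Steps 2-3: one accumulator per vertex, initialized to (0,0,0).
    std::vector<Vec3> normals(verts.size());
    for (size_t i = 0; i < normals.size(); ++i)
        normals[i].x = normals[i].y = normals[i].z = 0.0f;

    // Steps 1 and 4: the raw cross product has length proportional to the
    // triangle area, so adding it unnormalized gives the area weighting.
    for (size_t t = 0; t < tris.size(); ++t) {
        const IndexedTri& tr = tris[t];
        Vec3 n = cross(sub(verts[tr.v2], verts[tr.v1]),
                       sub(verts[tr.v3], verts[tr.v1]));
        int idx[3] = { tr.v1, tr.v2, tr.v3 };
        for (int k = 0; k < 3; ++k) {
            normals[idx[k]].x += n.x;
            normals[idx[k]].y += n.y;
            normals[idx[k]].z += n.z;
        }
    }

    // Step 5: normalize each accumulated normal.
    for (size_t i = 0; i < normals.size(); ++i) {
        float len = std::sqrt(normals[i].x*normals[i].x +
                              normals[i].y*normals[i].y +
                              normals[i].z*normals[i].z);
        if (len > 0.0f) {
            normals[i].x /= len; normals[i].y /= len; normals[i].z /= len;
        }
    }
    return normals;
}
```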

You can also modify this algorithm to have more than one normal per vertex but it could lead to a deep structure change…

In the second case, I have an (optimized, I think !) algorithm to bring you back to the first case… As it is a bit long to explain, I will wait until I know whether it would be useful to you or not !

Hope this helps !



[This message has been edited by Eric (edited 01-25-2001).]

Great, Eric. If you can make that assumption, then that’s the way to go.
Your approach won’t work for me because I cannot assume those things. I read .3ds files, where I have to calc normals according to smoothing groups, and you can have more than one normal per vertex.

Also, I think that for vertex normal calculations speed is not critical, since it’s usually done only once. I load the .3ds, calc the normals and the strips (which is slow too), and then store the precalculated scene in my proprietary format.

So I advise you to do the normals correctly, and don’t spend much time optimizing them, because that time will be better spent creating a proprietary format, optimizing rendering code, etc.

[This message has been edited by coco (edited 01-25-2001).]

I must say (I forgot to do it in my post !) that I agree with Coco: these things are usually done once and for all, so you should not spend too much time trying to optimize them (actually, there are not that many possible optimizations !).

Of course, if your main task is to import objects, you might want to optimize the normals calculation as much as possible.

Don’t laugh at this idea: one of my programs is a CFD post-processor and it is not rare for my colleagues to import their model to check it, then go back to the modeller to fix the modelling errors, and import again… leading to a lot of imports during a single day ! That’s why I did a bit of research on optimizing the calcs (we sometimes deal with very, VERY large models here…).



That’s perfect.
“Don’t laugh at this idea”
Hope you didn’t get me wrong… I guess I was thinking of more specific cases like game engines, VR apps, etc.

Originally posted by coco:
Hope you didn’t get me wrong.

No, I didn’t !

As a lot of (most ?) people here are using OpenGL for games/demos, I thought I’d give an example where the usual techniques for this type of app can prove less useful…

Actually, I am doing both here (this CFD post-processor but also a 3D modeller/viewer and a simulator of people movements !) : I get to know the tricks of both worlds !