Understanding glDrawElements()

Hello everyone!

I should clarify… I understand the concept behind the function. You save memory by storing each vertex once and reusing it by referencing it through an index array. I have an example I’m working off of that just uses colors and vertices, and everything is fine.

I want to add normals and texture coordinates, and the idea behind the indexing seems to fall apart… I’d rather save memory if at all possible and use this over glDrawArrays(), but I can’t seem to overcome this hurdle.

Just considering normals, let’s say I have two triangles that make up one square. I have only 4 vertices, and I feel like I should be able to get away with reusing a single normal over and over for each of them, just like I can reuse the two vertices on the shared edge. Now, if I make a cube and want to put the same texture area over all of the faces, I should be able to reuse those six coordinate pairs…

Am I missing something in the functionality? Since I’m using vertices, normals, and texture coordinates… would I just be better off sacrificing the memory and going with flat arrays that repeat certain values over and over? Is there a magic function that accepts separate indices for vertices, normals, and texture coordinates?

Thanks everyone, I hope I can get this straightened out…
-Jeremiah

Is there a magic function that accepts separate indices for vertices, normals, and texture coordinates?

No, there isn’t. A vertex (which here means a set of attribute values such as position, normal, texture coordinate, etc.) can be re-used as long as all attributes are continuous, i.e. as long as it is part of a “smooth” surface. Such vertices are generally a lot more common than those on hard edges or seams, so a single index value usually works well. Duplicating a few vertices may turn out cheaper than having multiple indices per vertex as well. A cube is an obvious counter-example, but luckily it doesn’t consume that much memory anyway.

Also, indices are not just there to save memory: the hardware also uses them to identify vertices that have already been processed and may still be in the post-transform vertex cache. Multiple index streams would make that lookup significantly more expensive.

Thanks for the quick reply, this does help me out. Treating the vertex as a set of all of its components makes sense.

I’m working on converting a model from Blender -> obj -> binary data (using a custom C command-line app) to be loaded into memory through I/O (on Android).

I did notice a post here from someone who had done something similar (after I’d already almost finished my obj file parser), and they mentioned needing to repeat vertex coordinates when the texture coordinates differed on a connecting face.

I’m still having a hard time picturing how I can save much memory when each vertex carries a position, normal, and texture coordinate. I may be lucky enough to have a particular point share the same texture coordinate (say, the center point of a regular polygon made of triangles)… but if I pull that center point out (perpendicular to the plane formed by the outside points) so that all of the face normals are now different, I have to repeat that vertex and texture coordinate 6 times and create a new index for each, correct?

Very sorry for the double post… but the board isn’t allowing me to edit.

I’d like to add one more question, as I’m going to try out a glDrawArrays() implementation first.

glNormalPointer() does not have a size parameter like the vertex and texture coordinate pointer functions do. Is it simply assumed that a normal has only 3 components? I suppose that would make the most sense… sorry if this sounds like an odd question. Being a hobby programmer, I pick up bits and pieces in a hard-to-learn order sometimes…

With smooth surfaces, vertex normals are not the same as face normals, but an average of the normals of all adjacent faces. Thus unless you want edges with hard transitions, you can share those vertices between the adjacent faces.

Yes, a normal in OpenGL is implicitly 3D.

Ahhh, now that helps tremendously. Ultimately, the models I’m working with should end up smooth, so I can see how a single point will easily have the same position, normal, and texture coordinates. This is wonderful news.

Now I just have to optimize the obj to binary converter. I had my first (partially) successful read from a binary file of vertices to create a model on Android today, so things are moving along nicely.

Thanks for all your help. I’m sure I’ll be back soon.