vertex arrays

I’m having some problems with storing my data in a way that works with gl*Pointer() and glDrawElements().

Is this the correct format?

float *Vertices;
float *Normals;
short *Indices;

Vertices = malloc(sizeof(float) * n_Vertices * 3);
Normals = malloc(sizeof(float) * n_Normals * 3);
Indices = malloc(sizeof(short) * n_Indices * 3);

Seems alright to me… I’d prefer to use vector structures myself, though.

typedef struct
{
    float x, y, z;
} S_Vector3;

S_Vector3 *vertices, *normals;
u16 *indices;

or something like that!
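For instance, allocation and setup with the struct could look something like this (just a sketch, reusing the counts from your post; on typical compilers the struct is three packed floats, so the pointer goes straight into gl*Pointer()):

S_Vector3 *vertices = malloc(sizeof(S_Vector3) * n_Vertices);
S_Vector3 *normals  = malloc(sizeof(S_Vector3) * n_Normals);
u16 *indices        = malloc(sizeof(u16) * n_Indices * 3);

/* each S_Vector3 is laid out as x, y, z with no padding, so it
   matches what glVertexPointer/glNormalPointer expect */
glVertexPointer(3, GL_FLOAT, 0, vertices);
glNormalPointer(GL_FLOAT, 0, normals);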

Nutty

Can a struct like that be passed as an array to gl*Pointer()? I thought it only took arrays… or is that struct basically a float vertex[3] array internally?

Your formula is basically correct, but I wanted to point out one extra little detail. You will need to make sure that n_Vertices == n_Normals and that there is a 1 to 1 relationship between the two arrays.

That is… The normal used for Vertices[n] will be Normals[n] where n can be anything from 0 to n_Vertices - 1.


As to the struct being passed to the function as an array, there is nothing wrong with that. You may have to do some extra casting to keep the compiler from complaining, but internally, the memory is stored exactly the same between the struct and float array.

For instance, if you were to use the struct you would need to do casting like so…

struct vertex
{
    float x, y, z;
};

vertex Verts[NUM_VERTS]; // can also be dynamically allocated

glVertexPointer(3, GL_FLOAT, 0, (float*)Verts);

And actually… I just remembered that glVertexPointer uses a GLvoid* for the last parameter, so that casting is probably not even necessary and I typed all that out for no real reason.


Thanks, I appreciate it. I’d rather use the struct, working with a raw array is a pain in the ass.

OK, I’ve switched to structs and glDrawElements still doesn’t work. Here’s what I have:

<pre>
typedef struct __VECTOR
{
    float x, y, z;
} VECTOR, VERTEX, NORMAL;

typedef struct __TEXCOORD
{
    float u, v;
} TEXCOORD;
</pre>

I read everything in and if I use a for loop like so:

<pre>
for (ii = 0; ii < Object->Mesh[i].FaceCount; ii++)
{
    glBegin(GL_TRIANGLES);
        glTexCoord2f(Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i1].u, Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i1].v);
        glNormal3f(Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i1].x, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i1].y, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i1].z);
        glVertex3f(Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i1].x, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i1].y, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i1].z);

        glTexCoord2f(Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i2].u, Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i2].v);
        glNormal3f(Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i2].x, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i2].y, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i2].z);
        glVertex3f(Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i2].x, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i2].y, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i2].z);

        glTexCoord2f(Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i3].u, Object->Mesh[i].TexCoord[Object->Mesh[i].Face[ii].i3].v);
        glNormal3f(Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i3].x, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i3].y, Object->Mesh[i].Normal[Object->Mesh[i].Face[ii].i3].z);
        glVertex3f(Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i3].x, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i3].y, Object->Mesh[i].Vertex[Object->Mesh[i].Face[ii].i3].z);
    glEnd();
}
</pre>

Each index in the Face struct is used for vertices, normals, and texcoords, but using

<pre>
glVertexPointer(3, GL_FLOAT, 0, Object->Mesh[i].Vertex);
glNormalPointer(GL_FLOAT, 0, Object->Mesh[i].Normal);

glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, Object->Mesh[i].Face);

</pre>

gives me nothing. Maybe I’m using these functions wrong, but I can’t see the problem.

From the MSDN spec on glDrawElements:

void glDrawElements(
GLenum mode,
GLsizei count,
GLenum type,
const GLvoid *indices
);

Parameters
mode
The kind of primitives to render. It can assume one of the following symbolic values: GL_POINTS, GL_LINE_STRIP, GL_LINE_LOOP, GL_LINES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_TRIANGLES, GL_QUAD_STRIP, GL_QUADS, and GL_POLYGON.

count
The number of elements to be rendered.

type
The type of the values in indices. Must be one of GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, or GL_UNSIGNED_INT.

indices
A pointer to the location where the indices are stored.

You might want “count” to be (Object->Mesh[i].FaceCount * 3). Unless you’re just rendering one polygon.

Also, you forgot to set up your texture coordinate pointer.
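Something like this should cover it (just a sketch, using the TEXCOORD struct and Mesh fields from your code above; remember to enable GL_TEXTURE_COORD_ARRAY too):

glTexCoordPointer(2, GL_FLOAT, 0, Object->Mesh[i].TexCoord);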

glDrawElements should be:

glDrawElements(GL_TRIANGLES, 3 * Object->Mesh[i].FaceCount, GL_UNSIGNED_SHORT, Object->Mesh[i].Face);

Nutty

Edit:

damn… pipped at the post by Korval!


Ahh, I didn’t know the second param was the total number of indices to be rendered (three per triangle); the SuperBible and MSDN didn’t explain it too clearly, or I had my head up my ass. Thanks so much guys.

outrider… you are right, the SuperBible does a terrible job of explaining that function! In fact… they’re wrong if you look at their definitions…

One other thing that hasn’t been mentioned yet that you may or may not be doing already, is enabling the vertex arrays with glEnableClientState.

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
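Putting the pieces from this thread together, the whole draw path would look roughly like this (a sketch only, reusing the Mesh fields and the corrected count from the posts above, and assuming each Face is three contiguous unsigned shorts i1, i2, i3 with no padding between them):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);

glVertexPointer(3, GL_FLOAT, 0, Object->Mesh[i].Vertex);
glNormalPointer(GL_FLOAT, 0, Object->Mesh[i].Normal);
glTexCoordPointer(2, GL_FLOAT, 0, Object->Mesh[i].TexCoord);

/* count is the number of indices, i.e. 3 per triangle */
glDrawElements(GL_TRIANGLES, 3 * Object->Mesh[i].FaceCount, GL_UNSIGNED_SHORT, Object->Mesh[i].Face);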

I prefer to use unsigned ints for my indexes, just depends on how complex your models are though.
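With ints it’s the same call, just a different type constant (a quick sketch; indexCount here is a made-up name for the total number of indices):

unsigned int *indices = malloc(sizeof(unsigned int) * indexCount);
/* ... fill in the indices ... */
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);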