The problem (and the reason for my question) is this:

Start with a simple quad, whose vertices are:

v1: 1, 1, 0

v2: -1, 1, 0

v3: -1, -1, 0

v4: 1, -1, 0

At each vertex, the normal would be:

0 , 0 , 1

With vertex arrays, you would have to copy that normal 4 times, once per vertex, in order to get them to work.

Now consider a cube:

Each corner is a vertex. So there are eight vertices. There are six faces and a single normal corresponds to each face.

To take that a stage further, let's say we want to apply a texture to each face. We could use 2D texture co-ords, which would give us a total of 4 unique tex_coords for this object (they are the same for each face).

With vertex arrays, instead of having

8 vertices

6 normals

4 tex_coords

we would have

24 vertices

24 normals

24 tex_coords

Now think of an object with 1000+ polys and you start to see the problem.

My actual problem goes beyond that…

I’m doing keyframed animation, so all the vertices and normals have to be stored for each keyframe. Say I load up 50 keyframes in total for all of a character’s movements: that’s one hell of a lot more data I have to store.

But data storage isn’t the only problem. Say I have a model with 1300 vertices (& normals) and about 1000 faces. I then need to interpolate every vertex and normal to make the animation work, and this has to be done every frame.

If I convert that to vertex arrays, then I’m talking about performing calculations on 6000 to 8000 vertices and normals (as opposed to about 2600).

That’s up to 5500 more calculations per frame!

Now add multiple enemy characters, backgrounds, etc., and it all starts to get messy.

I hope you see my point.

An additional question:

If anyone has had experience with this…

For my keyframing, would it be better to convert all the data in my animation frames to the vertex array ‘style’ up front, making the interpolation easier, or keep them as they are and work out each frame which vertex and normal belongs where?