Vertex arrays segmented

I’m the first to admit to being a relative newcomer to OpenGL (about a year of full-time programming).

I would like to see if others feel this is a good idea and even worth considering.

For the record, one of my application’s main functions has to do with surfacing image volumes (TIFF stacks). I primarily develop on the Macintosh (OS X 10.3), but do a little bit of Win32 as well.

I’ve organized my data for use with vertex arrays, texture coordinate arrays, and normal arrays. I primarily render with glDrawElements.

I think there would be some advantages to being able to divvy up the single vertex array xyz[] (for simplicity I’ve omitted stride options) into three arrays x[], y[], z[], providing, for every vertex, an index into each of the three coordinate arrays.

A pseudo-implementation might look like this:

GLfloat floatsx[] = {0, 10, 20};   /* unique x values */
GLfloat floatsy[] = {0, 100, 200}; /* unique y values */
GLfloat floatsz[] = {0, 1, 2};     /* unique z values */
GLint xpnts[] = {0, 0, 0, 1, 2, 2, 2, 2, 1}; /* per-vertex index into floatsx */
GLint ypnts[] = {0, 1, 0, 1, 1, 1, 2, 2, 2}; /* per-vertex index into floatsy */
GLint zpnts[] = {0, 1, 2, 0, 1, 2, 0, 1, 2}; /* per-vertex index into floatsz */


This might seem completely ridiculous, but there is real potential for reducing data size for data such as surface volumes, where LOTS OF VOXELS LINE UP (at least when the index requires fewer bits to store than the coordinate value itself). It would also make row, column, and depth changes lightning fast, if implemented to do so.

A possible problem might be all the time the GPU must waste looking up the proper value. But in defense of the idea, glDrawElements already performs a similar indexed lookup.

Comments and criticism welcome.

Variations of this theme come up from time to time. The bottom line is that until hardware functions this way, OpenGL won’t either.

You can do that with vertex programs, thanks to vertex attribute arrays. The only problem is that you have to write the corresponding vertex program for every case (lighting, texturing, fog, etc.).
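For what it’s worth, a vertex-shader sketch of that idea (hedged heavily: GLSL rather than ARB assembly, the 256-entry table size and the uniform/attribute names are invented, and old-style fixed-function built-ins are assumed):

```glsl
// Sketch only: assumes the unique coordinate values fit in small
// uniform arrays (256 entries is an arbitrary choice).
uniform float xs[256];
uniform float ys[256];
uniform float zs[256];

// Per-vertex attribute carrying the three indices (xi, yi, zi).
attribute vec3 coordIndex;

void main()
{
    vec3 pos = vec3(xs[int(coordIndex.x)],
                    ys[int(coordIndex.y)],
                    zs[int(coordIndex.z)]);
    // Only the position is handled here; lighting, texturing, fog,
    // etc. would each have to be reimplemented, which is the
    // drawback noted above.
    gl_Position = gl_ModelViewProjectionMatrix * vec4(pos, 1.0);
}
```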