What would be the best VertexObject class?

I’m now writing the 3rd revision of my VertexObject class - it’s a class that deals with the creation and rendering of vertex arrays.
It’s the 3rd revision because each time I think I’ve got it right, flexible and expandable, I find new extensions that make it necessary to modify the code in an ugly way to get it to deal with the new methods. The latest such extension that forced me to rethink the way I construct and render vertex arrays is the Vertex Program extension. Now, instead of having member variables (arrays) for vertices/normals/texcoords/colours etc., I have to have open-ended (generic) attribute arrays.
Now, this is a good thing, I suppose - because now I can have an array of arrays, with an enum of some kind to select and populate them.
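For what it’s worth, a minimal sketch of that “array of arrays, selected by enum” idea might look like this (all names here are made up for illustration, not from any actual engine):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// One generic attribute slot per vertex-program input, selected by an
// enum instead of dedicated position/normal/texcoord/colour members.
enum Attribute { POSITION, NORMAL, TEXCOORD0, COLOR, MAX_ATTRIBUTES };

struct AttributeArray {
    int components = 0;       // 1-4; 0 means "not present"
    std::vector<float> data;  // raw per-vertex floats
};

class VertexObject {
public:
    void setAttribute(Attribute a, int components, std::vector<float> values) {
        arrays_[a].components = components;
        arrays_[a].data = std::move(values);
    }
    bool has(Attribute a) const { return arrays_[a].components != 0; }
    const AttributeArray& get(Attribute a) const {
        assert(has(a));
        return arrays_[a];
    }
private:
    AttributeArray arrays_[MAX_ATTRIBUTES];
};
```

Adding a new kind of attribute then means adding an enum value, not touching the class itself.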
Any other considerations I should have?
Has anyone been through the same number of revisions as me?

It’s a horrible, horrible, horrible design problem. Last time I tried to thrash out the “right” answer it killed my progress stone dead for months on end; complete analysis paralysis.

The key to good OO design is “encapsulate the concept that varies”. Unfortunately, with vertex arrays, sodding EVERYTHING varies:

  1. What “aspects” do you want your verts to have? Position, OK, but 2, 3 or 4 dimensions? Maybe normals. Maybe texcoords; again, in 1/2/3/4 dimensions. Maybe multiple sets of texcoords. Maybe colours, again, 3/4/packed, maybe secondary too. Fog coords. Blending weights. Arbitrary vertex program input. It never ends.

  2. Types. Float? Double? Short?

  3. Layout. Sequential arrays or interleaved? If interleaved, what ordering? If sequential, are they contiguous or separate? (For a skinned model you might have dynamic vert positions and normals but fixed texcoords, in which case you might want to store them in different places.)

  4. Reuse. You’ll certainly want to share vert arrays as a whole between multiple scene elements. Do you also want to share selected parts? (Think skinned models again; each instance has its own position/normal data, but they could share texcoords.)

  5. Index arrays - store these inside the vertex array object, or outside? Same with group information (e.g. group 1: TRIANGLE_STRIP, 540 verts, group 2: TRIANGLES, 310 verts…)

  6. How to express read-only or completely opaque data in the class interface?

  7. What formats does the API support? What formats is the hardware optimized for?
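To make point 3 concrete: once you express attribute access as a base pointer plus a byte stride, the same client code works for both interleaved and sequential layouts. A rough sketch (names are illustrative, not from any real API):

```cpp
#include <cassert>
#include <cstddef>

// Interleaved layout: one struct per vertex, attributes adjacent in memory.
struct InterleavedVert { float pos[3]; float uv[2]; };

// Walking an attribute needs only a base pointer and a byte stride, so the
// same access code handles interleaved (stride = sizeof(vertex)) and
// sequential (stride = sizeof(attribute)) storage.
inline const float* attribAt(const void* base, std::size_t strideBytes,
                             std::size_t i) {
    return reinterpret_cast<const float*>(
        static_cast<const char*>(base) + i * strideBytes);
}
```

The decision then collapses to choosing a stride value per attribute, rather than two different code paths.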

And just when you think you’re starting to get a handle on the problem, along come those bleedin’ GL2.0 ragged vertex arrays…

Bottom line: I was being stupid, trying for more generality than was either required or possible. Decide what you’re going to want to do, which APIs you want to support, where you need to have flexibility, and go from there.

The way I’m going at the moment (and I’m not suggesting for an instant that this is the right or even a good way to do it) is roughly as follows:

Assume consistent use of floats for all vertex data. (Might make an exception for colour, not sure yet.)

To get a vertex object, client code first specifies a format object to say what they want, via enums. e.g. I want a POSITION aspect with datatype FLOAT3, and a TEXCOORD aspect with datatype FLOAT2. The format is passed to a polymorphic Renderer interface; the concrete subclass (e.g. OpenGL12Renderer or OpenGL20Renderer or D3D8Renderer or GeForce3GLRenderer or whatever) can customize the returned vertex object if it prefers a particular format. There’s generic support code to allocate a chunk of raw memory and set up pointers and strides and refcounts and stuff for the various requested aspects.
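A bare-bones sketch of that factory arrangement might look like the following (everything here is hypothetical scaffolding around the pattern described, not the actual code):

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Client-side format request: which aspects, in which datatypes.
enum Aspect   { POSITION, TEXCOORD };
enum DataType { FLOAT2, FLOAT3 };

struct VertexFormat {
    struct Entry { Aspect aspect; DataType type; };
    std::vector<Entry> entries;
    void request(Aspect a, DataType t) { entries.push_back({a, t}); }
};

struct VertexObject {
    // In a real version: raw memory chunk plus per-aspect
    // pointers, strides and refcounts.
};

// Polymorphic renderer interface; concrete subclasses may customize
// the returned object's internal layout for their preferred formats.
class Renderer {
public:
    virtual ~Renderer() = default;
    virtual std::unique_ptr<VertexObject>
    createVertexObject(const VertexFormat& fmt) = 0;
};

class OpenGL12Renderer : public Renderer {
public:
    std::unique_ptr<VertexObject>
    createVertexObject(const VertexFormat& fmt) override {
        // A concrete renderer could, e.g., widen FLOAT2 positions to
        // FLOAT3 here if the driver prefers it; the client never knows.
        return std::make_unique<VertexObject>();
    }
};
```

Usage is then simply: build a `VertexFormat`, hand it to whichever `Renderer` subclass is active, and use whatever comes back.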

The format of the object is pretty much hidden from client code. To access the vertex data, clients use strongly-typed accessors with names like beginVector2(POSITION), where POSITION could be any aspect. The accessor checks that the aspect exists in the object and is stored as a datatype compatible with Vector2, and returns an iterator object which is basically just a templated wrapper around a raw pointer and a stride count. The client can then use this iterator like a normal pointer - operator[], operator++ etc - without knowing anything about whether the data is sequential or interleaved, etc. Similarly, if the renderer didn’t like Vector2 formats for position, it could allocate the position aspect as Vector3 and just zero out the Z coord before returning the object; the client would never know.
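The iterator described above is essentially a templated wrapper around a raw pointer and a stride; a minimal sketch (names hypothetical) could be:

```cpp
#include <cassert>
#include <cstddef>

struct Vector2 { float x, y; };

// Strided iterator: raw pointer plus byte stride. Clients index vertex
// data through it without knowing whether storage is sequential or
// interleaved; if the stride ever becomes uniform, this collapses to a
// plain pointer with no client-code changes.
template <typename T>
class StridedIter {
public:
    StridedIter(void* base, std::size_t strideBytes)
        : base_(static_cast<char*>(base)), stride_(strideBytes) {}
    T& operator[](std::size_t i) {
        return *reinterpret_cast<T*>(base_ + i * stride_);
    }
    StridedIter& operator++() { base_ += stride_; return *this; }
    T& operator*() { return *reinterpret_cast<T*>(base_); }
private:
    char* base_;
    std::size_t stride_;
};
```

A `beginVector2(POSITION)`-style accessor would verify the aspect exists with a compatible datatype, then return one of these pointed at the aspect’s first element.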

It’s overkill, but I haven’t done enough with it yet to say just how badly it’s overkill. I don’t think it need be harmful; the iterator access will be a bit slower than raw pointers, but not much, and if as I suspect sequential arrays win out over interleaved in the not too distant future, the iterator class can just be cut down to a wrapped raw pointer, giving full speed without needing any changes to client code.

I’m worried about the need for hints when writing dynamic geometry. If I ever want to write directly to AGP memory, “details” like sequential vs interleaved data suddenly become very important.


You see what happens when you start to take this stuff too seriously. Premature generalization is every bit as deadly as premature optimization! Get the simplest code you can that meets your needs, and whatever else you do, DOCUMENT YOUR DESIGN DECISIONS AND THE REASONS BEHIND THEM! Then, when you do need to extend your class, you’ll have a nice clean base to build on, and you won’t waste time on dead-ends you already explored last time.

I have wrapped VAR and VAO in an interface that is very similar to the VAO interface.

To make it work with VAR, you need to do some housekeeping, but it is very minimal.

It works great. It’s very flexible because it is type-independent and data-independent. It just stores bytes, and lets higher-level objects worry about what they mean, which is what you want!

[This message has been edited by Gorg (edited 02-04-2002).]

I’m glad to see I’m not the only one who has had to go through this decision making process, only to come out the other end with an uneasy feeling like it’s not the best way…

– Zeno

Yes, designing is not a trivial task. But in the end the only good solutions are the ones that suit your needs.

So actually finding out what you want might be the most difficult thing to do.

And what about shaders?
At the moment, I have a vertex object, I populate it with various arrays, then pass this object into an instance of a Shader class, which then sets up the states for rendering.
This gives me problems, in that it’s up to the creator of the vertex object to create the appropriate arrays for the shader to use (e.g. some vertex programs (which are set up and enabled in the Shader class) need certain data to be in certain attribute arrays).
I’m sure it should be up to the Shader class to configure the vertex object instance, but this limits the ability of any external function to control the contents of the vertex object instance… it’s getting confusing even trying to explain the dilemma.
Ah well, I’m sure I’ll find a nice way eventually, but with all the other work I’m having to do to keep up with projects on the go, it’s getting hard to think clearly. But I won’t bore you with my time-management problems… I just wish I was on a desert island with nothing but the sounds of birds fluttering around, to give me the time and peace to come up with a nice design… or maybe go on the dole for a while… You GL hobbyists probably have better opportunities to come up with nice code…

well, in your vertex shader interface, when you set the mesh, you need to specify which buffer is for vertices, which for texcoords, which for normals, and which is application dependent.

I use attribute flags for each kind of data that can fit in my vertex arrays, and I can use more than one array in a “mesh”. The attributes are defined as a bit mask telling me how many components, and the data type (with one data type meaning “not present”). Something like this:

#define AttributeBitsSize 4
#define PosAttributeShift 0
#define NormalAttributeShift 4
#define ColorAttributeShift 8
#define Tex0AttributeShift 12
#define Tex1AttributeShift 16
#define UserAttributeShift(x) (20+(x)*4)

inline uint attributeBits( int count, GLenum type ) {
    assert( type == GL_UNSIGNED_CHAR || type == GL_FLOAT || type == GL_SHORT );
    assert( count >= 0 && count <= 4 );
    if( !count ) return 0;
    uint typeCode = 0;
    switch( type ) {
    case GL_UNSIGNED_CHAR:
        typeCode = 1<<2; break;
    case GL_SHORT:
        typeCode = 2<<2; break;
    case GL_FLOAT:
        typeCode = 3<<2; break;
    }
    return typeCode | (count-1);
}

As I pack each field descriptor for an array into 4 bits, I can fit 8 different fields into an array, each of which can be uchar, short or float, 1-4 elements, or not present.
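To illustrate the packing, here is a self-contained version of the scheme as I understand it from the description (the type-code values 1/2/3 for uchar/short/float and the particular shifts are taken from the post; the helper names are made up):

```cpp
#include <cassert>

typedef unsigned int uint;

// 4-bit field descriptor: high 2 bits = type code, low 2 bits = count-1.
// Type code 0 doubles as "field not present".
enum TypeCode { TYPE_NONE = 0, TYPE_UCHAR = 1, TYPE_SHORT = 2, TYPE_FLOAT = 3 };

inline uint fieldBits(int count, TypeCode type) {
    assert(count >= 0 && count <= 4);
    if (count == 0) return 0;                  // field not present
    return uint(type << 2) | uint(count - 1);  // pack type and count-1
}

// Shifts matching the layout in the post: 8 fields of 4 bits each.
const int PosShift = 0, NormalShift = 4, ColorShift = 8, Tex0Shift = 12;

// Example descriptor: float3 position, float3 normal, float2 texcoord 0.
inline uint makeDescriptor() {
    return (fieldBits(3, TYPE_FLOAT) << PosShift)
         | (fieldBits(3, TYPE_FLOAT) << NormalShift)
         | (fieldBits(2, TYPE_FLOAT) << Tex0Shift);
}
```

The whole vertex format then fits in a single uint, which is cheap to compare and to export from tools.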

To set up the array pointers, I just “walk” each nybble in the descriptor, testing whether the field is there or not, and if it is, extracting the count/type, calling the appropriate ArrayPointer() function, setting a flag to remember to enable the clientstate, and updating my notion of field offset for the next field (if any).
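That nybble walk might look roughly like this sketch (the real version would make the gl*Pointer() calls and clientstate bookkeeping where the comment indicates; here it only computes the offsets and total vertex size):

```cpp
#include <cassert>

typedef unsigned int uint;

// Decode the high 2 bits of a field nybble into a component size in bytes.
inline int typeSize(uint typeCode) {
    switch (typeCode) {
    case 1: return 1;  // uchar
    case 2: return 2;  // short
    case 3: return 4;  // float
    }
    return 0;          // 0 = not present
}

// Walk each 4-bit field in the descriptor, accumulating the byte offset
// that each present field would pass to its ArrayPointer() call.
// Returns the total interleaved vertex size.
inline int walkDescriptor(uint descriptor) {
    int offset = 0;
    for (int field = 0; field < 8; ++field) {
        uint nybble = (descriptor >> (field * 4)) & 0xF;
        if (!nybble) continue;               // field not present
        int count = int(nybble & 3) + 1;     // low 2 bits: count-1
        int size  = typeSize(nybble >> 2);   // high 2 bits: type code
        // ...call the appropriate ArrayPointer(count, type, stride, offset)
        // here, and flag the clientstate for enabling...
        offset += count * size;
    }
    return offset;
}
```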

While I hard-code the order for the first 5 arrays to the common kinds of vertex streams, you don’t have to do that; you can just treat the uint as containing 8 arbitrary field descriptors. Of course, your tools will have to emit the appropriate field descriptor when exporting the mesh data you’re going to be stuffing into this guy, too.

I allow more than one array in my meshes, so the vertex/normal data could come from one place, and the color/texture data could come from another. I also use the uppermost nybble to express padding/alignment restrictions, as some vertex formats need 32-byte alignment within the array because of the code I’ll be running on them.

Note that it’s often the case that “generality” and “efficiency” are contradictory, and with real-time graphics, you might need to design for efficiency rather than generality. That doesn’t mean you’re a poor designer; it means you’re a good engineer who knows how and where to make trade-offs.

That is so weird, Jwatte, because that is exactly the design I’ve been toying with. Thanks for your input.