VBO to VAO

(Fixed the link) The extension spec says that a VAO contains all state in tables 6.6, 6.7, and 6.8 of the OpenGL 2.1 spec, minus GL_CLIENT_ACTIVE_TEXTURE. This does include GL_ELEMENT_ARRAY_BUFFER_BINDING.

Curiously, it also includes GL_ARRAY_BUFFER_BINDING, which the 3.1 spec explicitly excludes from VAOs (see table 6.5). So there is a slight difference between the extension and 3.1 core.

The speculations and doubts could be reduced a lot if people tried to actually read the spec. VAOs are really not complicated at all, and I don’t know what you think is weird about simply taking a bunch of context state and putting it into a separate state object. It’s been done before with texture objects.

What makes you think nobody wanted it this way?

All these speculations and doubts, the reported negligible (if any) performance advantage, the limited application possibilities, and the weird usage make me wonder why VAO has been promoted to core so quickly and unasked for.

Nothing you have stated here is true, save perhaps for the performance advantage. Performance is naturally the purview of the hardware makers. They will optimize VAO when they feel like devoting resources towards doing so.

GL 3.0 is only 9 months old.

Any of the “speculation” and “doubts” you are hearing is due to unfamiliarity with the extension. Its function is explicitly laid down in the specification. VBOs had a similar journey; it took a long time before people understood how binding a buffer object for rendering worked.

Why hasn’t it been available as an extension before to prove its usefulness?

VAO was adapted from an Apple extension, called APPLE_vertex_array_object.

Thanks for the correction. Sorry I missed that.

So (to correct myself) it seems that a minimalist example for how VAOs would be used if you only needed to swap a texture and change vertex array and index array bindings between batches would be:

  glBindTexture
  glBindVertexArray
  glDrawRangeElements
  ...
  glBindTexture
  glBindVertexArray
  glDrawRangeElements
  ...

(after the VAOs have been built of course).

…which might explain why I didn’t see a perf boost when I tried this before. Similar to how you often don’t see a perf boost with VBOs unless everything’s a VBO. I’ll have to go back and retest that later…

You probably shouldn’t expect a huge performance boost, as you mostly save a few state-changing calls. Unlike using VBOs, where you actually avoid a copy of all vertex data used in a draw call.

But VAOs (and state objects in general) make working with OpenGL more convenient and less error-prone IMO.

Nothing you have stated here is true, save perhaps for the performance advantage.

I’ll try to explain.

  1. The extension spec language for VAO is not very clear. The main purpose is ‘hidden’ in one small sentence: “BindVertexArray may also be used to bind an existing vertex array object.” There would be less discussion going on here if the spec were more explicit and elaborate at this point.

  2. I see the limitation in application mainly in the fact that it only supports static attribute/index data. There is no way to change a VAO after it has been created (err, first bound). I don’t know if constantly creating/destroying VAOs is recommended performance-wise (in the sense of streaming geometry from disk and dynamically placing it in larger VBOs). I would have liked to see a separation of vertex attribute format/layout and position inside a VBO.

  3. The ‘weird usage’ refers to the fact that a single function does both storing and restoring state, depending on how often it gets called. I would have liked to see two separate functions here. The current method resembles GL_COMPILE_AND_EXECUTE for display lists, whose use has been discouraged for a very long time now. Separating store and restore would also make it possible to change a VAO several times.

  4. It might be true that VAO had a predecessor as an APPLE extension. I don’t have access to Macs, so Mac-only extensions are practically non-existent to me. Nor have I heard any “success stories” about APPLE_vao on these forums.

I don’t criticize the fact that there are efforts to make GL faster and better. I just wonder how quickly and with how little reflection some extensions went straight into core, while others, like EXT_texture_filter_anisotropic, seem to starve to death.

There would be less discussion going on here if the spec were more explicit and elaborate at this point.

Specifications are meant for those persons implementing the specification. They are not meant for users of the specification.

While specifications are the only form of real documentation that OpenGL users get, this is not the intended situation.

There is no way to change a VAO after it has been created (err, first bound).

There is no such thing as an immutable OpenGL object. You may modify a VAO as much as you wish.

The ‘weird usage’ refers to the fact that a single function does both storing and restoring state, depending on how often it gets called

Your statement suggests that you do not know how VAOs work. Or that you do not understand how OpenGL objects in general work.

Every OpenGL object is defined in terms of a set of context state data. Binding an object is the equivalent of copying the state stored in the object to the context. This is true for every OpenGL object and every binding action.

The behavior of buffer objects and VAO are not dependent on how often either one gets called.

It might be true that VAO had a predecessor as APPLE extension.

It is true; your awareness of its existence being irrelevant to that truth. The VAO specification was even designed to use the same enumerators, so that there would be compatibility at some level between the two.

Nor have I heard any “success stories” about APPLE_vao on these forums.

APPLE_vertex_array_object is 7 years old. That you have neither heard of it nor heard of any “success stories” is again irrelevant.

I just wonder, how quick and unreflected some extensions went straight into core, while others like EXT_texture_filter_anisotropic seem to starve to death.

I have never understood this issue with what happens to be core vs. what is an extension. EXT_texture_filter_anisotropic is more widely implemented than GL 3.0. For practical purposes, you can assume it exists. For comparison, you have to ask for GL 3.0.

A feature being an extension does not mean it is not widely supported, used, or tested. Similarly, a feature being core does not mean that implementations of it are mature or as fast as they could be. VAO being the best example of this.

While specifications are the only form of real documentation that OpenGL users get, this is not the intended situation.

Intended or not, most of the time it is the only source of information.

Your statement suggests that you do not know how VAOs work.

Indeed. I just realized minutes ago :slight_smile: Thanks!

A feature being an extension does not mean it is not widely supported, used, or tested. Similarly, a feature being core does not mean that implementations of it are mature or as fast as they could be. VAO being the best example of this.

In the strict sense, extensions are optional. You can leave them out and still have a compliant implementation. Why do you think that core features do not need maturity? IMHO maturity and usefulness should be the number one reasons why a feature is promoted to core at all.

Do you think VAOs are not mature or not useful, and if so, why?

The comment on maturity was not especially directed at VAO.
A good example of “how its supposed to be” is EXT_fbo. When it came out, the community was asked if it should go to core. The community said no, leave us some time to evaluate. With time, a set of new extensions came out, enhancing FBOs and lifting many restrictions. FBOs matured and culminated into ARB_fbo, which is now in core.
The whole process probably should have taken 2 years less, of course :wink:

The community said no, leave us some time to evaluate.

The community said “no,” because EXT_FBO was not finished. There was missing functionality in the extension as written. It did not go into the core because it was not ready to be core.

The version that was eventually adopted into the core was much more complete.

For VAO, the way it was meant to be used is that you need to create a VAO for every object you need to render, with one exception. If you have 2 objects whose offsets into the VBO are identical, then 1 VAO for both is enough.
I’m not using VAO for the moment. Is there a performance benefit?

And also, the specification isn’t very clear. Is the glEnableClientState state part of the VAO or not?

Is the glEnableClientState state part of the VAO or not?

glEnableClientState sets the state that says whether a non-generic vertex array is active. That state is part of tables 6.6-6.8. Thus, this state is stored in a VAO.

Eh, I’m not sure what this means, so probably this is one bit (or byte) of understanding that might help me understand how OpenGL works more easily. Please explain, or give links or references to what I should read. Thanks.

It means that binding a buffer object to a specific vertex attribute is a two-step process. You don’t bind the buffer object directly to the attribute. Instead, you bind it to the context bind point GL_ARRAY_BUFFER, then call glVertexAttribPointer to copy that binding to the vertex attribute bind point.

Eh, I’m not sure what this means, so probably this is one bit (or byte) of understanding that might help me understand how OpenGL works more easily. Please explain, or give links or references to what I should read. Thanks.

To complement what Xmas said, I believe the term latched state was chosen because of “latches” in electronic circuit design. Take a look at this for more info.

I believe the term latched state was chosen because of “latches” in electronic circuit design.

It was chosen because it was the only way of implementing VBO without making a new set of entrypoints for binding vertex attributes. At the time, there were a lot of vertex attribute binding functions. Also, the ARB felt that it was important that code could easily switch back and forth between VBO and non-VBO functionality.

It means that binding a buffer object to a specific vertex attribute is a two-step process. You don’t bind the buffer object directly to the attribute. Instead, you bind it to the context bind point GL_ARRAY_BUFFER, then call glVertexAttribPointer to copy that binding to the vertex attribute bind point.

All this would be easier to understand if the terminology was precise. Perhaps the terminology is precise, but I never found, read and internalized the clear definitions.

What exactly is “bind”? I adopted this term too, without being 100% clear what it means in the context of OpenGL. In my mind, it means we call an OpenGL function that assigns one or more values to an OpenGL structure (that structure being part of what we call “OpenGL state”, I believe).

For example, when my code calls glVertexAttribPointer(), I would say this “creates an OpenGL data structure (state) that contains several values (attributeID, datatype, stride, offset, normalization) and adds the address of that structure into whichever VBO was most recently made active by glBindBuffer(GL_ARRAY_BUFFER)”. Presumably the internal VBO structure contains a 16+ element array of attribute pointers, which the “attributeID” argument in glVertexAttribPointer() specifies. However, you seem to say calling glVertexAttribPointer() binds the active VBO to the attribute, not the attribute to the active VBO as I imagine it. How confusing.

That’s how I currently think about the term “bind”, but some of what I’m reading makes me wonder if I understand this term correctly… or even whether the term has a fully consistent meaning in the context of OpenGL.

Though many of the answers in this thread are helpful, somehow I am still not fully clear about the full functional relationship of VAOs, VBOs, IBOs, and vertex “attributes”. What might be the best way to clear this up for me is to state, in pseudocode or a word description, exactly what the glDrawElements() function refers to and does when it is called. What does it look for first? What in that entity’s structure/state does it look for, and what does it do with it? Knowing all this should make clear what programs need to do to prepare to call glDrawElements(). This might be the best vehicle to explain the logic behind this wonderful but complex mess (VAO/VBO/IBO/attributes) that makes our programs so fast and convenient, but my mind so confused. Any takers?

All this would be easier to understand if the terminology was precise. Perhaps the terminology is precise, but I never found, read and internalized the clear definitions.

The terminology is precise. People on this forum are not.

Binding only ever happens between an object and the OpenGL context. The term for associating one object with another object is “attachment”, not binding. Shader objects are attached to program objects. Texture objects are attached to framebuffer objects. Buffer objects are attached to VAOs.

What might be the best way to clear this up for me is to state, in pseudocode or a word description, exactly what the glDrawElements() function refers to and does when it is called.

Very well.

OpenGL is a state machine. Most OpenGL function calls simply set various pieces of that state.

Objects in OpenGL are defined (in most cases) as a specific subset of the full OpenGL state. For example, here is the OpenGL context state for a texture object:


struct TextureObjectState
{
	void *pTextureData;
	InternalFormat eFormat;
	TextureType eType; //1D, 3D, Cube, etc.
	int iWidth;
	int iHeight;
	int iDepth;
	TextureFilter eFilter;
	int iMaxAnisotropy;
};

static TextureObjectState *pContextTextureState = new TextureObjectState();

Conceptually, the implementation of glTexImage1D would look like this:


void glTexImage1D(int iWidth, InternalFormat eFormat, const void *pData)
{
	DeleteTextureData(pContextTextureState->pTextureData);
	pContextTextureState->pTextureData = AllocateTextureData(iWidth, eFormat);
	pContextTextureState->iWidth = iWidth;
	pContextTextureState->eFormat = eFormat;
	pContextTextureState->eType = GL_TEXTURE_1D;
	if(pData)
	{
		CopyTextureData(pContextTextureState->pTextureData, pData);
	}
}

So, if you call “glBindTexture” with a texture object, what you do is this:


void glBindTexture(TextureObjectState *pNewObj)
{
	pContextTextureState = pNewObj;
}

This leaves out the fact that you the user don’t directly have TextureObjectState objects. You have texture names that map to TextureObjectState objects. If you pass texture name 0, that automatically maps to the default TextureObjectState that was originally stored in pContextTextureState.

It should be obvious that calling any function that modifies texture object state after binding will cause these functions to modify the texture object you provided.

You asked about vertex state. Here’s what that looks like:


struct VertexAttributeState
{
	bool				bIsEnabled = false;
	int					iSize = 4; //This is the number of elements in each attrib, 1-4.
	unsigned int		iStride = 0;
	VertexAttribType	eType = GL_FLOAT;
	bool				bIsNormalized = false;
	bool				bIsIntegral = false;
	void *				pPtrOrBufferObjectOffset = 0;
	BufferObject *		pBufferObj = 0;
};

struct VertexArrayObjectState
{
	BufferObject *pElementArrayBufferObject = NULL;
	VertexAttributeState attributes[MAX_VERTEX_ATTRIB];
};

static VertexArrayObjectState *pContextVAOState = new VertexArrayObjectState();
static BufferObject *pCurrentArrayBuffer = NULL;

Coincidentally, VertexArrayObjectState is the sum total of the state that gets stored in a Vertex Array Object. You can find the definition of all of this in the state tables section (6.2) of the OpenGL specification, version 3.1. Older versions have more state in them, but still retain this basic structure.

The remaining question is this: how do you set this state?


void glBindBuffer(enum target, uint buffer)
{
	BufferObject *pBuffer = ConvNameToBufferObj(buffer);
	
	switch(target)
	{
	case GL_ARRAY_BUFFER:
		pCurrentArrayBuffer = pBuffer;
		break;
	case GL_ELEMENT_ARRAY_BUFFER:
		pContextVAOState->pElementArrayBufferObject = pBuffer;
		break;
	...
	}
}

void glEnableVertexAttribArray(uint index)
{
	pContextVAOState->attributes[index].bIsEnabled = true;
}

void glDisableVertexAttribArray(uint index)
{
	pContextVAOState->attributes[index].bIsEnabled = false;
}

void glVertexAttribPointer(uint index, int size, enum type, boolean normalized, sizei stride, const void *pointer)
{
	VertexAttributeState &currAttrib = pContextVAOState->attributes[index];

	currAttrib.iSize = size;
	currAttrib.eType = type;
	currAttrib.iStride = stride;
	currAttrib.bIsNormalized = normalized;
	currAttrib.bIsIntegral = false; //glVertexAttribPointer sets up float-converted attributes
	currAttrib.pPtrOrBufferObjectOffset = pointer;
	currAttrib.pBufferObj = pCurrentArrayBuffer;
}

This works just like any other state object. Allocated Vertex Array Objects will have their state set by these functions.

The last line of glVertexAttribPointer is the critical association between the Buffer Object and the Vertex Array Object. It uses whatever object is in pCurrentArrayBuffer (part of the OpenGL context). We saw above that this is set by glBindBuffer(GL_ARRAY_BUFFER).

Aside: you can see how stressed and strained the OpenGL object model is when you look at things like FBO and shader/program objects. When you bind an FBO to the context, it actually changes the state vector for the framebuffer. Shader and program objects don’t even use the context, except to be rendered. You set other objects directly into them.

glDrawElements is one of those functions that does not change OpenGL state. It is a rendering command. How it renders is dependent on the current state of OpenGL. All you need to do is set up the current OpenGL state to be what you need it to be and call glDrawElements.

And pre-VAO, buffer objects are bound to the context. That’s still reflected in the fact that the get values for the VAO attachment points are called GL_ELEMENT_ARRAY_BUFFER_BINDING and GL_VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, despite having turned into attachment points.

“That’s still reflected in the fact” … “despite having turned into attachment points.”

Alfonse has already written words to enlighten mortals, illuminate complexity, and display how much we all really want a different paradigm (whether we yet know it or not), so I can but copy them:

“you can see how stressed and strained the OpenGL object model is”…

Indeed parts in the API, and the model, are overloaded from historical meanings - often in attempts to not increase the number of identifiers or perceived complexity even further.

<mode=rant?>
Judging by 3.0, I can’t say I’m too optimistic about the future. But still, I hope for the day when 1.x/2.x is but an open source layer you can d/l and compile should you want it - using a completely new (modularized) API - an OpenGL TNG.

Where at least software people from competing companies were allowed to work together, and openly, to design the best damn 3D API we could. No NDA’s, no bullshit, just interface designers/architects/hackers/<whatever_you_like> doing what they do best.

With version nine MS actually managed to create something both useful and reasonably long-lived (for MS) - D3D9. The ARB with its many competent members should be able to create something better, if it managed to make its members collaborate. Not only because such a group would be greater than the sum of its parts, but to take the future of their lives in their own hands - or are the gfx vendors of today happy to be in the hands of a single software vendor from Seattle?

Sorry this became way off-topic.
</mode>