VAO is evil

Here is what I found in the OpenGL spec that made me mad:

From OpenGL 3.3 spec, section E.2.2 Removed Features
Client vertex and index arrays - all vertex array attribute and element array index pointers must refer to buffer objects. The default vertex array object (the name zero) is also deprecated. Calling VertexAttribPointer when no buffer object or no vertex array object is bound will generate an INVALID_OPERATION error, as will calling any array drawing command when no vertex array object is bound.

So VAOs are not only completely annoying, useless, and inefficient (and sometimes imply a cost), but they are also REQUIRED in the core specification. So far I was happy to just ignore them, but apparently I can’t. Damn.

(I guess my suggestion here is to add a section to the spec: section E.2.3 Evil Features… or update it.)

why do you think they are annoying, useless and inefficient? and what cost do you mean?

i am working with them and i am quite happy (also using DSA they are a breeze to use).
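The DSA path mentioned above can be sketched roughly like this, assuming OpenGL 4.5 / ARB_direct_state_access function names (the poster may have been using the older EXT_direct_state_access variants); `vbo`, `ibo`, and the `Vertex` struct with `pos`/`uv` members are hypothetical placeholders, not names from the thread:

```c
/* Sketch: building a VAO with direct state access, so no
 * bind-to-edit is needed. `vbo` and `ibo` are assumed to be
 * already-created, already-filled buffer objects. */
GLuint vao;
glCreateVertexArrays(1, &vao);

/* Attach the vertex buffer to binding point 0 (offset 0, stride = one vertex). */
glVertexArrayVertexBuffer(vao, 0, vbo, 0, sizeof(Vertex));
glVertexArrayElementBuffer(vao, ibo);

/* Attribute 0: vec3 position. */
glEnableVertexArrayAttrib(vao, 0);
glVertexArrayAttribFormat(vao, 0, 3, GL_FLOAT, GL_FALSE, offsetof(Vertex, pos));
glVertexArrayAttribBinding(vao, 0, 0);

/* Attribute 1: vec2 texcoord, same binding point. */
glEnableVertexArrayAttrib(vao, 1);
glVertexArrayAttribFormat(vao, 1, 2, GL_FLOAT, GL_FALSE, offsetof(Vertex, uv));
glVertexArrayAttribBinding(vao, 1, 0);
```

The appeal is that the VAO is edited by name, without disturbing the currently bound vertex array state.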

I have to dig into the forum archives and pull out the endless debates!

You mean something like this:

I agree, VAOs are a good idea, but their actual implementation is horribly useless.


I agree word for word.
Were you aware that it was required in the OpenGL core profile?

This thread is the latest example.

Currently, VAOs are just ‘pretty make-up’… and a restriction of all the use cases down to only one (a good one, admittedly): very large buffers holding multiple meshes, modified with glMapBufferRange. Allocations / reallocations? OWNED. Most renderers that designed their own buffer manager? OWNED. Is the future of OpenGL removing flexibility? That’s the only OpenGL feature that goes in this direction, without any benefit.
We end up with 1 VAO per draw call per set of buffers.

Previous threads which deal in one way or another on that topic, that I remember:…true#Post248269…true#Post248281

Is it just a mistake in the appendix? I can’t see any reason the default vertex array object should be deprecated. It’s even mentioned in section 2.10 of the 4.0 spec:

If a vertex array object that is currently bound is deleted, the binding for that object reverts to zero and the default vertex array becomes current.

Nice finding Dan!
We have a spec error here.
Nice :slight_smile:

i don’t understand what is so horrible about them. Brolingstanz’ post here [1] describes the use case i am using them for: just to pack together the state of a vertex input setup (vertex format plus buffers). this way the driver only needs to look up the actual internal vao object instead of multiple buffer ids -> buffer objects (bindless goes further, i know).

when mapping and manipulating buffers attached to a vertex array, the array stays valid, so no loss there. could you elaborate on where you think the current API is lacking?
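The "pack the vertex input setup into one object" use case described above can be sketched with the classic bind-to-edit API (GL 3.x core); `vbo`, `ibo`, the interleaved 5-float layout, and `indexCount` are all assumed placeholders, not values from the thread:

```c
/* Sketch: capturing a vertex format plus its buffers in one VAO.
 * `vbo` and `ibo` are assumed pre-filled buffer objects holding
 * interleaved position (vec3) + texcoord (vec2) vertices. */
GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);

glBindBuffer(GL_ARRAY_BUFFER, vbo);          /* captured via the attrib pointers below */
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);  /* element buffer binding is VAO state */

glEnableVertexAttribArray(0);                /* position */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 5 * sizeof(float), (void*)0);
glEnableVertexAttribArray(1);                /* texcoord */
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 5 * sizeof(float),
                      (void*)(3 * sizeof(float)));

glBindVertexArray(0);

/* Per draw call, the whole setup comes back with a single bind: */
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
```

Note that mapping `vbo` with glMapBufferRange later does not invalidate the VAO; the VAO records the buffer name, not its contents.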


Ok, so we debate how relevant this is or isn’t, and we never really answered.

What is the benefit of VAOs? I don’t have any answer for that. On the other hand, they imply constraints. Constraints vs. nothing => bad trade-off.

(I expect to write a proper wish list for future OpenGL spec, I will try to detail this.)

Technically you get a bit cleaner code with it, and theoretically you could get slightly faster rendering if the implementation is good enough, maybe almost as good as when using bindless (though as far as i know it hasn’t been done yet).
If you look at that test, VAOs actually perform pretty well above something like 2100 vertices, which is right in the ballpark of where you should be with VBOs.

Were you aware that it was required in the OpenGL core profile?

If you don’t like VAOs, simply create one at initialization time and never unbind it. Then you never have to worry about it. They’re not immutable, so it’s not like you have to worry about anything.
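The tip above boils down to a few lines at startup; `dummyVao` is a hypothetical name for illustration:

```c
/* Sketch of the workaround: create one VAO up front and leave it
 * bound forever. Legacy-style glBindBuffer / glVertexAttribPointer
 * code then keeps working in a core profile, because every call
 * simply mutates this single, always-bound VAO. */
GLuint dummyVao;
glGenVertexArrays(1, &dummyVao);
glBindVertexArray(dummyVao);  /* never unbound again */
```

This satisfies the core-profile requirement that a vertex array object be bound, without forcing a renderer to restructure around per-mesh VAOs.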

Good tip!

Now I understand why VAOs are so slow. For this reason, they can’t actually be optimized.
VAO optimization: A myth.

For what reason?

I think they can, and they can rule. A simple system of IsDirty flags and sets of function pointers that get latched is all that’s necessary for the front end, plus the same for the VRAM memory management at the core.

(with my x86/arm asm hat on)

What puzzles me is that drivers didn’t improve VAO performance for so many moons, back when the optimizations were easier to make (before the VAO-is-mandatory change).

Alright, sounds reasonable. Still puzzled too, but there must be a reason.