You can jump to a different model/frame/LOD with a single BindVertexBuffer call, rather than having to create individual VAOs for each model/frame/LOD or respecify the full set of VertexAttribPointer calls for each one (worth noting that the stride is not really a big deal here and is unlikely to change in real-world programs; the offset is the important one).
Welcome to my entire point: if the stride isn’t going to change in any real-world system, why is the stride not part of the vertex format?
There’s a reason I gave it the “one little mistake” award: the only thing wrong with the functionality is that the stride is in the wrong place, for no real reason. Or rather, the only reason is that “Direct3D does it that way.” It doesn’t actually make sense; that’s just how they do it.
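To make the complaint concrete, here’s a rough sketch of the 4.3-style setup (buffer names and the interleaved position/normal layout are illustrative, not from the original posts). The attribute formats are specified once per VAO, but notice where the stride ends up: it travels with the buffer in glBindVertexBuffer, not with the format.

```c
/* Format state: set once per VAO. An interleaved layout is assumed here:
   vec3 position followed by vec3 normal, 24 bytes per vertex. */
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                 /* position */
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float)); /* normal   */
glVertexAttribBinding(0, 0); /* both attributes source from binding point 0 */
glVertexAttribBinding(1, 0);

/* Per model/frame/LOD: one call swaps the source buffer.
   The stride (24 bytes) is a property of this call, not of the format above,
   even though for a fixed format it will never change. */
glBindVertexBuffer(0, vbo_frame0, 0, 6 * sizeof(float));
```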
the only seemingly bad part being (and I haven’t fully reviewed the spec, so I may have missed something) that BufferData/BufferSubData/MapBuffer/MapBufferRange haven’t been updated to take advantage of the new binding points.
There aren’t new binding points, in that sense. glBindVertexBuffer does not bind the buffer to a binding target the way glBindBufferRange does; that’s why it doesn’t take a “target” enum. It attaches the buffer only to an indexed vertex buffer binding point, not to a target you could then modify through calls like BufferSubData.
This was almost certainly done to allow glVertexAttribPointer to be defined entirely in terms of the new API. glVertexAttribPointer doesn’t change GL_ARRAY_BUFFER’s binding, nor does it change any other previously-visible buffer binding state. Therefore, glBindVertexBuffer doesn’t either.
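That equivalence can be sketched as follows. This is a paraphrase of roughly how the spec expresses the old entry point in terms of the new API (non-zero stride case); the function name is mine, and the key property is that the only buffer state it reads is GL_ARRAY_BUFFER’s current binding, without changing it.

```c
/* Sketch: glVertexAttribPointer(index, size, type, normalized, stride, pointer)
   expressed via the 4.3 separate-format API. */
void vertex_attrib_pointer_equivalent(GLuint index, GLint size, GLenum type,
                                      GLboolean normalized, GLsizei stride,
                                      const void *pointer)
{
    GLint buffer;
    glGetIntegerv(GL_ARRAY_BUFFER_BINDING, &buffer); /* read, never written */

    glVertexAttribFormat(index, size, type, normalized, 0);
    glVertexAttribBinding(index, index); /* attribute i sources binding point i */
    /* The old "pointer" becomes an offset into the buffer; GL_ARRAY_BUFFER's
       binding is left untouched, which is why glBindVertexBuffer can't
       disturb any modifiable target either. */
    glBindVertexBuffer(index, (GLuint)buffer, (GLintptr)pointer, stride);
}
```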
Personally, I don’t have a problem with the minor API inconsistency.