While debugging my GLSL importer, which was using a tightly packed buffer, I found that, in contrast to other API calls, glVertexArrayVertexBuffer() (and glBindVertexBuffer()) does not treat a stride of 0 as meaning a tightly packed buffer. Instead, it literally uses a stride of 0 and repeats the first element.

Is there a reason for this, or is it an oversight? Because of this, it seems I cannot reuse the same binding for different attributes sharing the same buffer, since tight packing can entail different strides due to their size differences.

Apologies if I missed anything.
glBindVertexBuffer doesn’t know the size of the elements, so it can’t convert a stride of zero to an actual stride.
It’s intentional. Calling glVertexAttribPointer with a stride of zero will calculate the effective stride at the time of the call. It can do this because it has the format information available; glBindVertexBuffer doesn’t, and glVertexAttribFormat doesn’t modify the binding point. You can have multiple attributes using the same binding point, but they’ll all have the same stride (this is how you’d typically handle array-of-structures vertex data).
Similarly, the attribute divisor is a property of the binding point. Essentially, the binding-point state contains the information required to supply a block of memory for each vertex, while the attribute state contains the information needed to extract the data for a specific attribute from its associated memory block.
You can bind the same buffer to multiple binding points if you have multiple attribute arrays (as opposed to a single array with multiple attributes per element) within a single buffer.
I see, thank you. In that regard, is the use of glVertexAttribPointer comparable to having separate bindings for each attribute, as opposed to using glBindVertexBuffer, where they can be shared?
Yes and no.
In terms of the OpenGL specification, yes: each attribute index has a separate binding index when using glVertexAttribPointer. In the 4.3+ GL specification, glVertexAttribPointer is defined in terms of the separate-format API (glVertexAttrib*Format, glBindVertexBuffer, and glVertexAttribBinding).
But in terms of how implementations implement it, no. Implementations have always been able to figure out if you’re using interleaved attributes within the same region of a buffer and therefore only use one internal buffer binding point for multiple attributes.
So you’re not really losing anything by using the old API. Though you really shouldn’t; it’s just a terrible API.