Clarity of accessor offset and glVertexArrayAttribFormat

I just started a gltf loader. I thought I understood the design until I encountered large accessor offsets.

The mental model I had was:

  1. glTF buffer = OpenGL buffer.
2. glTF bufferView = OpenGL bindingindex (e.g. glVertexArrayVertexBuffer).
  3. glTF accessor = OpenGL attribindex.

My workflow is this:

  1. For each gltf buffer, glCreateBuffers a GL buffer and load it up completely.
  2. For each primitive in each mesh:
    a. Create a VAO.
    b. glVertexArrayElementBuffer the GL buffer referenced by the indices accessor’s bufferView.
    c. For each vertex attribute in the primitive:
    i. Associate each bufferView referenced by an attribute accessor with glVertexArrayVertexBuffer. This sets the view’s byteOffset and byteStride.
    ii. glEnableVertexArrayAttrib the attribute index.
    iii. glVertexArrayAttribBinding to associate the bindingindex of the bufferView from (i) with the attribindex from (ii).
    iv. glVertexArrayAttribFormat to define the vertex attribute’s size, type, and relativeoffset, taken from the attribute’s accessor.

I was expecting each accessor’s byteOffset to be less than its bufferView’s stride, but it’s not. It contains large linear offsets:

    "bufferView": 2,
    "byteOffset": 39276,
    "componentType": 5126,
    "count": 3273,
    "max": [...],
    "min": [...],
    "type": "VEC3"
There’s one bufferView per stride, and the accessors are laid out sequentially within those views, not interleaved. OpenGL raises GL_INVALID_VALUE from glVertexArrayAttribFormat when it is passed such a large relativeoffset (the parameter must not exceed GL_MAX_VERTEX_ATTRIB_RELATIVE_OFFSET).

How am I supposed to interpret these data structures, and what’s the point of bufferView? Seems like it may have no special OpenGL treatment and I should just fold its offset/stride into the accessor.

The primitive loading attempt is here

This is kind of a weird case that glTF exporters can create. If a view is used only by one accessor, then the byteOffset in the accessor isn’t really important. That is, since data is not being interleaved, the accessor’s byteOffset is really just a component of the view’s byteOffset. For non-interleaved cases, the exporter should put all of the offset into the view, and leave the accessor’s byteOffset as zero.

But they don’t strictly have to, so you need to compensate for that in these cases. For non-interleaved data (i.e. a view used by only one accessor), take the sum of the byteOffsets of the accessor and the view, and use that as the offset in glVertexArrayVertexBuffer. The relativeoffset in glVertexArrayAttribFormat should then be 0.


I wouldn’t send glTF buffers to OpenGL/the GPU directly. Personally, I create a GPU buffer (OpenGL buffer) for each bufferView and don’t mirror the glTF buffers at all.

That’s because a buffer may contain unrelated data, e.g. images. See:

    "images": [
        { "uri": "duckCM.png" },
        { "bufferView": 14, "mimeType": "image/jpeg" }
    ]

The image data is stored in a buffer, and the image uses a bufferView to access it. So an image’s bufferView may not point into a standalone buffer; it may share the same buffer as the mesh data.

I think it is safer to continue creating one buffer per bufferView, but I may be wrong, so please correct me. Actually, “safe” may not be the right word, because the program would work either way; the cost is just sending redundant data to the GPU. Also, after reading the bufferViews, an interleaved buffer for the GPU can be built if it is needed.

If you were going to send glTF buffers to the GPU directly, and keep your loader simple, it might be easier to use something like gltfpack to format the buffers in a single consistent way beforehand. Anything else could just be unpacked before upload, if needed.

If an engine has a buffer layout that it prefers, it’s likely that you could create glTF data in that layout. But engines don’t all want the same layout, and therefore not all glTF files are going to be on the “happy path” for all engines unless you optimize them in advance.