target vs name

When BufferData is called the first parameter is target - for example, ARRAY_BUFFER. BufferData copies data from the client to a buffer object associated with target. However, buffer objects also have names which are returned with a call to GenBuffers. Why doesn’t BufferData take the name of the buffer instead of target? What is the rationale for using target instead of the name?

> Why doesn’t BufferData take the name of the buffer instead of target? What is the rationale for using target instead of the name?

Because that’s how OpenGL does things. The sooner you accept that this is how OpenGL works, the better.

If you really want the gory details, it’s very simple: it is the easiest way to define the feature while still allowing old code to use the functionality.

Let’s wind the clocks back to 1992. Back then, there were no objects (well, there were display lists, but let’s ignore that for now). Back then, you had GL_TEXTURE_2D, which is not an object. It is a block of state, just like any other section of state in the context. If you wanted to render an object with a texture, you called glTexImage2D(GL_TEXTURE_2D) to allocate the new image and upload pixel data to it.

That’s where display lists come in, actually. Back then, what you did was record your glTexImage2D calls into a display list (along with the relevant glTexParameter calls). When you wanted to “bind” your texture, you just executed the display list.

Now this is all powerfully stupid on a number of levels. But then, nobody’s ever accused SGI of an overabundance of intelligence. The important point is that this is the API.

Jump forward a few years. Everyone realized how stupid this was. So they started working on an extension to cut out the middle-man and get rid of the whole “display list” nonsense. They wanted textures that were real, live objects that lived on the “server”.

However, you’ve got all of these older APIs: glTexImage1D/2D. glTexParameter. All of them are defined to work on context state. They don’t have a parameter that takes a texture object. And there’s a lot of code out there written to use these functions. So you’ve got two choices:

  1. Create a bunch of new functions, thus making it rather more difficult for already-existing code to take advantage of it.

  2. Define the object completely in terms of state. If you want to change the object, you bind it and call the old functions.

SGI and friends took the latter course of action. For reasons that were somewhat legitimate at that time. All the old code worked with the new objects almost seamlessly. You simply add a bind call before your usual texture setup stuff, and now you’ve captured all of that state. From the perspective of easy porting, it doesn’t get much easier than that.

And in so doing, they ushered in the standard OpenGL Object model.

Almost every OpenGL object works this way. Why? Because in almost every case, it is the easiest way to offer the feature that will let old code work with the new stuff.

VAOs store the state set by glEnable/DisableVertexAttribArray, glVertexAttribPointer, glEnable/DisableClientState, glVertexPointer, glTexCoordPointer, and I can keep going with that, but you see my point. That’s a lot of functions to duplicate. And if you duplicate them, people have to port their code over. FBOs encapsulate glDrawBuffers and glReadBuffer. And on down the line.

Only in recent years has the ARB decided that this was powerfully stupid. 3Dlabs, for all of the stupidity they created in GLSL, did at least one thing right: they made almost all of the program getting and setting functions take program objects rather than requiring you to use the program. Of course, they missed the massive group of glUniform calls. Sampler objects are completely devoid of the ability to bind and modify them; glSamplerParameter takes a sampler.

Buffer objects actually are kind of an oddball here. Why? Because there’s no backwards inertia, no existing code to support. Textures, FBOs, VAOs, all encapsulated existing state. The reason samplers and programs went a different way was that they were defining new state.

Just like buffer objects.

However, back in those days, that was simply how objects were expected to work. Even though back in those days, there were all of two object types (display lists and textures), and only textures really worked that way. Textures showed the ARB the way, and they took it.

An interesting historical footnote is ATI_vertex_array_object (not to be confused with ARB/APPLE_VAO, which is completely different). This was really the forerunner to modern buffer objects. And it doesn’t have the bind-to-modify stuff. What’s interesting is that if you look at the ARB_VBO extension, which first defined buffer objects, you’ll see a lot of NVIDIA personnel in the contributors list. Issue number 2 is interesting too, as it specifically justifies bind-to-modify by appealing to the convention used by existing objects.

A convention shared, at the time, by exactly one object.

I say that’s interesting because NVIDIA would later go on to create the gargantuan EXT_direct_state_access extension, whose sole purpose is to undo the whole bind-to-modify thing. I.e., something NVIDIA had a hand in standardizing. I find the irony amusing: time was, it was ATI who seemed to want the DSA-style API, and now it’s NVIDIA who seems to be pushing for it the most.

And on a completely unrelated, yet hilarious-in-hindsight note, the VBO spec includes a notation remarking on how small the set of drawing commands was at the time.

For reference, as of OpenGL 4.3 there are 22 separate glDraw functions.

The simpler days, when there were only 5 glDraw calls…