Is glBindBufferRange's index really a (per context) global constant?

For a while now I’ve wondered if I’ve missed some subtlety since OpenGL is full of twists and turns. It seems like the index assigned to uniform block interfaces is simply a fixed constant with no reference to anything else.

It seems to me this is a psychotic design. Everything else uses handles. This makes it pretty much impossible to coordinate these constants across libraries unless they all agree on how to manage them, or expressly expose an API for handing out ranges so they don’t step on each other’s toes. Is this really how it works, or am I missing something basic?

The text of your question title doesn’t seem to match the text of the actual body of the question.

To answer your question title, glBindBufferRange binds a buffer to an index in an indexed binding point in the current context. It cannot affect the state of any other context. So, is the index a “per-context global constant”?

What does that mean?

If you bind a buffer to an indexed binding point, then you did that because that index is being referenced by some shader process. For example, if you bound the buffer to the GL_UNIFORM_BUFFER binding point, then that means you have one or more programs somewhere that have a uniform block whose binding was set to that index.

Does that make it a “constant”? I don’t know; that depends on what you’re doing with it.

But it is per-context, since this is a context binding operation, not an object modification operation. And it is global, in the sense that all context state is “global”.

What do you mean by “everything else”? The UBO resource management API (the way you associate a uniform block in a shader with an actual buffer object) is functionally identical to how you do it with textures.

With textures, you bind a texture to a particular texture unit’s index. You make sure that the program(s) that want to use that texture associate the texture variable in the shader with that texture unit index. The shader’s index must match the index you used to bind the texture. You can bind a different texture to that texture unit, and the next time you use that shader, it will read the new texture. You can change the index value in the shader, and it will use whatever texture is bound to that texture unit.

UBOs work the same way. It uses different words, but the structure is identical.

You bind the buffer to the GL_UNIFORM_BUFFER indexed binding point at a particular index. You then assign the binding index for a uniform block in your shader to use that particular index.
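To make the parallel concrete, here’s a minimal sketch of the two flows side by side (the object names, the sampler/block names, and the index 3 are placeholders, not anything specific):

/* Textures: bind the texture to a unit, point the sampler at that unit. */
glActiveTexture(GL_TEXTURE0 + 3);
glBindTexture(GL_TEXTURE_2D, tex);
glUseProgram(prog);
glUniform1i(glGetUniformLocation(prog, "uDiffuse"), 3);

/* UBOs: bind the buffer to an indexed binding point, point the block at that index. */
glBindBufferBase(GL_UNIFORM_BUFFER, 3, ubo);
glUniformBlockBinding(prog, glGetUniformBlockIndex(prog, "SceneData"), 3);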


A per-context global constant is like the number 1, as opposed to a name from glGenX. I know that it would be further trouble to use a runtime value in a shader. But it would be worth the extra work to make them interoperable, or there could be a name-based binding GL API that could accomplish it. (Edited: I think my understanding is there is such an API, but it may not work for shared buffers, i.e. layout(binding=0, std140).)

It’s not cross-context because the objects aren’t cross context. I guess I just wanted to be clear that these aren’t strictly global slots on a GPU or anything. So they could be remapped in theory if the API was different than I understand it. (Hence parentheses.)

This helps me understand the design some, but intuitively it seems like resetting these slots the same way you would set textures might not perform well. At least intuitively I think of them as “set it and forget it” (to encourage interoperation), more so than textures, where historically there were limited units and I don’t know… I don’t know if many people try to set up lots of textures in high indices and manage that in shaders.

I think your answer confirms that I understand it correctly. I just think it would be saner if OpenGL provided a built-in way for modules to collaborate without trashing each other (we expect texture slots to be trashed. Edited: maybe it would even be good to have names for texture slots if it made things scale better and reduced API calls.)

It’s no different to how shader programs access anything else: uniform blocks, SSBOs, textures, images, default-block uniforms, attributes. In the code for the program object, they’re all indices (small, non-negative integers). The binary format for shader objects gives them names so that the association between name and index can be controlled by the client prior to linking, or queried after linking. Well, textures and images don’t have names, the others do.
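For example (a sketch with placeholder names), an attribute’s index can be chosen by the client before linking or queried afterwards:

/* Before linking: the client decides that "position" is attribute index 0. */
glBindAttribLocation(prog, 0, "position");
glLinkProgram(prog);

/* After linking: query whichever index the linker assigned to "normal". */
GLint normalLoc = glGetAttribLocation(prog, "normal");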

And which of those numbers do you think comes from a glGen* function? It’s none of them.

Vulkan, D3D12, and Metal all had the opportunity to have a resource API that works like that by attaching the resource (image or buffer) directly to the shader. And yet, none do this. They use an intermediary construct, whether binding points in the context or descriptors that represent groups of bound resources.

Maybe there’s a reason none of them do it that way.

The closest you get to what you’re talking about is bindless texturing, and even that doesn’t use OpenGL object names. You also don’t attach the texture to the shader; you instead pass a system-generated 64-bit number, which represents that texture, through an existing interface.
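Roughly, with ARB_bindless_texture it looks like this (a sketch; tex, prog, and the uniform name are placeholders, and the shader has to opt in on its side too):

GLuint64 handle = glGetTextureHandleARB(tex);  /* driver-generated 64-bit handle */
glMakeTextureHandleResidentARB(handle);

glUseProgram(prog);
glUniformHandleui64ARB(glGetUniformLocation(prog, "uDiffuse"), handle);
/* The handle could just as well be fetched from a UBO/SSBO instead of a uniform. */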

Nobody is making you reset anything. You can leave the same buffer bound to the same indexed binding point forever, as far as the API is concerned.

If you want a buffer that contains, for example, the camera and perspective matrices for all objects in the scene, you can just pick a binding point and bind it there. All of your shaders can set their uniform blocks for that data to use that binding point. You can even establish conventions as to which binding points mean what (0 could be for scene data, 1 for per-object data, 2 for skinning matrices, etc).

Also, if you do it this way, and need to change the common scene buffer (maybe you’re using a different set of matrices but you don’t want to modify data while it is in use), you do not have to go to every shader to change what buffer to use. You just bind a new buffer region to the same index in the context. That’s less buffer binding than your way.
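As a sketch of that kind of convention (the names and index values are arbitrary):

enum { BIND_SCENE = 0, BIND_OBJECT = 1, BIND_SKINNING = 2 };  /* app-wide convention */

/* Every shader's scene block uses binding = BIND_SCENE; bind the buffer once. */
glBindBufferRange(GL_UNIFORM_BUFFER, BIND_SCENE, sceneUBO, 0, sceneByteSize);

/* To swap in a different set of matrices later, rebind the slot; no shader changes. */
glBindBufferRange(GL_UNIFORM_BUFFER, BIND_SCENE, otherSceneUBO, 0, sceneByteSize);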

The API is what you choose to make of it. It’s a set of tools; how you use them is up to you.

I’m not trying to argue device API strategies… does the OpenGL forum not have a “Solved” system installed?

Anyway, I do disagree, because when the uniform block (“mass constants”) system was introduced, it was so that variables could be shared. Before then I don’t know whether OpenGL let you keep constants (uniforms) in named registers like D3D or not. Where using fixed values (literals) breaks down is when you’re not in control of everything that happens on the context… and even if you were, it would be a headache that could be managed by OpenGL the way it does with “names”. In theory you don’t need “names” for anything either, and could just roll your own name generators and pass them to OpenGL (I assume that doesn’t work), but there was a historical reason for them being there… and it doesn’t hold up for abstract uniform buffer blocks IMO. They’re more heavyweight, and unlike textures they’re hard to keep track of in your projects because they’re so multipurpose and non-local, especially when large groups of shaders overlap.

Anyway, the reason I asked this question is I wasn’t sure I fully understood the organization around this feature. Thank you Alfonse, et al.

I think you should have the option of using a preprocessor pattern or some other method to set the binding number, that modular libraries would respect it to avoid collisions, and that apps consuming those libraries would do likewise; if OpenGL enabled this, the modules and apps wouldn’t have to pre-agree on some third-party mechanism. (The same system could be applied to texture units, etc… apps might even find that helpful for managing material libraries, but I don’t know about performance in that case.)

I’m continuing the debate because I don’t want to give anyone the impression that your arguments represent how OpenGL works or how people use OpenGL. Because they don’t.

This is what sharing UBOs looks like in OpenGL:

glBindBufferRange(GL_UNIFORM_BUFFER, mySharedIndex, buffer, 0, byteSize);

glUseProgram(prog1);
glDrawElements(...);

glUseProgram(prog2);
glDrawElements(...);

Where is the problem with sharing variables here? I only had to do one bind. Both of the shaders had been set to use mySharedIndex (whether within the shader with layout(binding) or outside of it by querying a specific UBO name and assigning its binding).
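The “outside the shader” route is just two calls (a sketch; “Shared” stands in for whatever the block is actually named):

GLuint blockIndex = glGetUniformBlockIndex(prog1, "Shared");
glUniformBlockBinding(prog1, blockIndex, mySharedIndex);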

So what is the problem? And, equally importantly, what would your preferred API look like?

When exactly would you not be in control of what happens in the context? How are you able to ensure anything that happens if you’re not in control of it?

Again, I remind you that every API works this way, to one degree or another. They are all based on some kind of “fixed values” defined at compile-time in the shader, even if those values aren’t always numbers.

Changing UBO state is less heavyweight than changing texture state.

Nobody else seems to be having this problem.

None of that makes any sense.

First of all… you can change the binding for any of the resources of a shader. All you have to do is look up the uniform block by name with the introspection API and set its binding. There’s no need for a preprocessor if you want to set the binding at “runtime” rather than within the shader.

Secondly, think about what you’re saying.

In order to set the binding from outside of the shader (whether it’s a preprocessor thing or OpenGL’s mechanism or SPIR-V specialization constants or anything else), you have to impose some kind of expectation on the shader. The shader must be written so that a particular name is used, and your code outside of the shader gives that name meaning.

Maybe you decide that the uniform block named Perspective is where the perspective matrix UBO should go, and at runtime, you decide that the UBO binding index 3 is where Perspective should go.

But that means there is still a fixed, compile-time thing in the shader text that tells the outside code how to interact with it. So if you have a problem with something in the shader being fixed at compile-time… that’s just how it has to be. The external code has to have something to key off of in order to associate a runtime resource with the consumer of that resource in the shader text.
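With the program interface query API, that association might look like this (a sketch; “Perspective” and the index 3 are just the values from the example above, and perspectiveUBO/perspectiveByteSize are placeholders):

/* Find the block the shader author agreed to call "Perspective"... */
GLuint block = glGetProgramResourceIndex(prog, GL_UNIFORM_BLOCK, "Perspective");

/* ...and decide, at runtime, that it reads from UBO binding index 3. */
glUniformBlockBinding(prog, block, 3);
glBindBufferRange(GL_UNIFORM_BUFFER, 3, perspectiveUBO, 0, perspectiveByteSize);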

Simple: any OpenGL code implemented in a module. How does it coordinate? It draws to an FBO or something like that using the current context. Would it make sense to have multiple contexts do this? Maybe, but often OpenGL code assumes the current context, because context setup is complicated.

This would solve the problem if instead of setting the binding, it set the program to use the UBO by its name, and not its “unit” as you put it.

Someone put a “like” on the OP. Maybe it was a bot. But it seems like a serious coordination problem to me, one that people will run into often. The only clean solution is to push/pop the entire global UBO state, which might not be known, might be very large, or might be something you’d like to “set and forget”. (Because it’s not a localized thing, the way textures more often are.)

I already explained this: if app writers wanted a more traditional arrangement for these objects, they would allocate indices as “names” (glGenX) and then either bind those names with OpenGL APIs or use #define to set the binding in their programs. Otherwise there’s no way to coordinate and you push/pop state, which is not ideal…

(EDITED: Again, yes, you can roll “GenX” yourself, but it would need to be a common enough practice that all modules (support code) are aware of the third-party name pool. If OpenGL provided this, then they all would know and it wouldn’t be a communication problem.)

How would that module coordinate anything with the main application? If the module wants some function to be called by the main application, how does it get that to happen?

It does so by convention. The main application is coded to call some particular function, and the module’s job is to provide that function. They both agree to follow that convention: the module provides a function of a certain name, and the main application calls the function with that name.

The same goes here. The main application establishes a convention: UBO binding point 2 is where the camera and projection matrices go. The module’s job is to put the UBO it wants to use into that slot. Or its job is to make sure shaders that want to use those matrices use UBO binding point 2.

After all, in order for the main application and the module to coordinate, they both have to agree on the layout of that UBO, right?

If your concern is that the module might bash the state of binding points that the main application intends to use (ie: it puts something else in UBO binding point 2)… that’s fine. So long as you structure your application appropriately.

Binding points are not supposed to be where something lives indefinitely. They are ephemeral; they are meant to be there for a period of time. So you structure your application so that you don’t call a module unless you’re OK with resetting any binding points.

Take imGUI for example. This is a module that has to share OpenGL with the main application. To render the GUI, it will have to use the various binding points. So your application knows that, if it wants to do some drawing after imGUI, it needs to consider all of that state to be dirty.

But, imGUI will only affect certain binding points. It will draw to whatever framebuffer is bound to the draw framebuffer binding point at the time the imGUI render call is made. So even imGUI follows a convention.

Of course, being a GUI, it’s pretty easy to either render it all before the scene or render it all after the scene. Which means that you’re not redoing any work if you rebind the entire scene.
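And if the application really does need to preserve a particular binding point across such a call, that’s a handful of indexed queries (a sketch, using index 2 from the example above; the module call is a placeholder):

GLint   savedBuf   = 0;
GLint64 savedStart = 0, savedSize = 0;
glGetIntegeri_v(GL_UNIFORM_BUFFER_BINDING, 2, &savedBuf);
glGetInteger64i_v(GL_UNIFORM_BUFFER_START, 2, &savedStart);
glGetInteger64i_v(GL_UNIFORM_BUFFER_SIZE, 2, &savedSize);

RenderTheModule();  /* placeholder for the module's rendering call */

if (savedSize > 0)
    glBindBufferRange(GL_UNIFORM_BUFFER, 2, (GLuint)savedBuf, (GLintptr)savedStart, (GLsizeiptr)savedSize);
else
    glBindBufferBase(GL_UNIFORM_BUFFER, 2, (GLuint)savedBuf);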

At the end of the day, the ability to easily share and change resources used by multiple shaders and draw calls (remember: if you want to change a commonly used resource, your way requires tracking down every shader and changing it) is more important. Also, it likely better matches the hardware. If you change programs (already a heavy-weight state change), the system doesn’t have to change all resources, even if the two shaders share many of the same resources.

Don’t pay for what you don’t use.

And literally no graphics API works the way the two of you desire. Maybe there’s a reason for that.

Even with command buffer APIs like Vulkan or D3D12, where you can execute a command buffer in the middle of recording another command buffer, the state of bound descriptors after executing the other command buffer is either undefined (for Vulkan) or inherited from the CB you executed (D3D12). In neither case does the CB automatically reset the bound descriptors back to what they were before the primary CB.

Such APIs could have had a way to automatically reinstate descriptor resources. But they don’t (well, they do, but that’s called rebinding the descriptor set. It’s still something you have to explicitly do).

What you want has costs associated with it, and not everyone wants to pay those costs.

Just to be clear, I’m talking about a pure software solution to a simple problem. I.e. the generated names would be generated on CPU… what apps do with the names would be on CPU. It’s not a tall order to introduce a modicum of sanity.
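Something like this is all I mean (purely hypothetical, CPU-only; none of it is an actual OpenGL API beyond the limit query):

/* A hypothetical, CPU-side "glGen for binding indices": modules that agree to
   use it never hard-code 0, 1, 2, ... themselves. */
static GLuint nextUniformBinding = 0;

GLuint allocUniformBindingIndex(void)
{
    GLint maxBindings = 0;
    glGetIntegerv(GL_MAX_UNIFORM_BUFFER_BINDINGS, &maxBindings);
    /* Real code would report exhaustion instead of silently returning 0. */
    return (nextUniformBinding < (GLuint)maxBindings) ? nextUniformBinding++ : 0;
}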

That doesn’t change the fact that no API does what you want. Vulkan and D3D12 were both designed in consultation with actual working graphics programmers who build major graphical applications. If these APIs don’t have this feature, it is because actual working graphics programmers do not see it as desirable.

If you want to write such a mapping table, that’s up to you. But it shouldn’t be part of the graphics API.

I’m not trying to butt heads, but by your own logic (as I already said) OpenGL could’ve been designed without any glGen mapping tables, but it wasn’t. It’s an arbitrary decision that in saner times held sway. (Because if the glGen APIs didn’t exist, this problem would’ve existed for every name in OpenGL. That would be just as fun. And we’d all have to “write such a mapping table” for everything. Tower of Babel.)

It was an “arbitrary decision”. An “arbitrary decision” that was recognized by the ARB (and basically everyone else) as being bad. Those weren’t “saner times”; they were “older times”, in which performance characteristics of CPUs were different.

The OpenGL object model with its name-to-object mapping is just uniformly terrible from a performance perspective. It originally existed to allow users to just decide that “3” is a valid object name and force the implementation to go along with it.

And back when the CPU was having to do transform-and-lighting stuff, back when a scene might have 200 textures maybe, that was fine. The time to load data from RAM to the cache wasn’t that huge relative to CPU clock cycles, so the cost of doing a mapping lookup wasn’t so bad.

By 2005, it wasn’t fine anymore; it was a performance problem (on top of being a huge implementation headache). The GPU was doing more stuff, and CPU performance was massively outpacing memory performance. So having the CPU “pre-chew” the data the user is setting via this mapping table is helpful to basically nobody. It was making people who didn’t need it pay the costs for it, while offering a benefit to a vanishingly-few users.

The Longs Peak revision of OpenGL, a failed attempt at a near-total API rewrite, would have ditched the “mapping table” API altogether. So it’s clear that people who care about performance saw this as a problem.

And this is why OpenGL 3.2 ditched at least the ability for the user to specify names, relegating it to compatibility-only. They couldn’t ditch the whole object name API entirely, but at least they allowed the implementation to ensure that you couldn’t just invent a name.

… OK, it seems abundantly clear that you and I are talking past each other. Either we’re not talking about the same thing, or we have profoundly different ideas about how OpenGL is used in actual applications.

What would it mean for glGen* to “not exist?” Well, the API would still need objects, things that represent GPU constructs and/or resources. So you need some kind of object model to interact with them.

The obvious alternative is to deal in pointers (like all other sane APIs). That is, to create a texture you call glCreateTexture which returns a Texture*, where Texture is an opaque type. Functions that manipulate textures would presumably take a Texture* as a parameter. When it comes time to use the texture, you would call glBindTexture with the Texture* and the texture unit you’re binding it to.

So… why would this force anyone to write a mapping table? What would that mapping table map between?

It would just treat everything as a “slot” or “unit” or “binding” or “index” or “register”, like these APIs do. (It would say you can have some arbitrary number of vertex buffer objects, etc., and you pick your own numbers. It would be chaos.)

EDITED: If you mean, why would pointers require this? In theory, if they were pointers on the GPU, it wouldn’t; otherwise, the pointers wouldn’t require a mapping table if they were also the binding points in the shader. Then you wouldn’t have any texture units, or UBO interface units, etc. (Although internally, if they don’t map to the GPU, the GL API would have to map them. But it wouldn’t be the user’s problem then.)

OK, I think I’m starting to understand. You’re calling the context’s binding points a “mapping table”. Which is fair, I guess, but eliminating that “table” is not terribly helpful since that’s how the actual hardware works. The actual GPU shader programs do not store resources (well, they sometimes can); they reference them through the context.

The “mapping table” is hardware.

Earlier in the thread, I gave you an example of how the current API enables the sharing of buffer objects:

glBindBufferRange(GL_UNIFORM_BUFFER, mySharedIndex, buffer, 0, byteSize);

glUseProgram(prog1);
glDrawElements(...);

glUseProgram(prog2);
glDrawElements(...);

You want this:

glAttachBufferRangeToProgram(prog1, mySharedIndex, buffer, 0, byteSize);

glUseProgram(prog1);
glDrawElements(...);

glAttachBufferRangeToProgram(prog2, mySharedIndex, buffer, 0, byteSize);

glUseProgram(prog2);
glDrawElements(...);

Why is this better? How is what you are suggesting in any way more efficient than the way it currently works? Resources as they currently exist are outside of the shader. This means binding a shader does not also bind resources.

This is a good thing, as two shaders can be made to work with the same (or mostly the same) resources. That’s good for performance. If you need to change what the shared resource is, you can just change it in one place; you don’t have to track down every program that uses it.

Even if you start using bindless textures, the texture handles don’t have to be stored in the program itself. They can be stored in UBOs, SSBOs, or even vertex data as numbers fetched by the shader logic.

Well, if I have to pick between those I want this (the first one) but just with a guarantee (via anything… glGenX would be best because everyone would use it reflexively) that opaque modules don’t also choose mySharedIndex and so clobber my setting.

EDITED: It’s not even the clobbering that’s the problem… it’s the fact that it’s a nightmare for modules to coordinate this detail. It puts back-breaking work on developers to reinvent the wheel and hope for the best, or at worst use black magic like guessing what indices are free in their system.

This is the part that confuses me in this discussion. I don’t know what you mean when you talk about making glGenX do something. Generating an object name and binding that object have nothing to do with one another. These are fundamentally separate processes.

Like, when you create an object, do you want to directly associate it with a specific binding point, such that it can never be used with any others or something? That’s a terrible idea because buffers especially can be bound to multiple binding points. You may change the binding range for a bound buffer even if you’re using the same buffer. This is very common for double-buffering data to avoid synchronization overhead.
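A minimal sketch of that double-buffering pattern (the index, sizes, and names are illustrative, and half is assumed to already be padded to GL_UNIFORM_BUFFER_OFFSET_ALIGNMENT):

GLintptr offset = (frameIndex & 1) ? half : 0;  /* one buffer, two halves */

glBindBuffer(GL_UNIFORM_BUFFER, frameUBO);
glBufferSubData(GL_UNIFORM_BUFFER, offset, frameDataSize, frameData);

/* Same binding index every frame; only the bound range changes. */
glBindBufferRange(GL_UNIFORM_BUFFER, 1, frameUBO, offset, frameDataSize);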

It’s a calling convention. You and any modules you use agree on what convention to use, and you stick with it.

As I mentioned above, imGUI has the convention that, by calling it to render the GUI, you are agreeing that any and all state is fair game for it to modify… except for the bound framebuffer and viewport. That’s not a “nightmare”; that’s just what the module you’ve chosen to use imposes on you. If you have something you need to preserve across this call, it’s on you to preserve it. And if you don’t… then there’s no problem at all.

More constrained, specific modules will have finer-grained calling conventions. But this problem is best solved by having a clear convention that all parties adopt.

I’m sorry you can’t understand what I mean. I’m mostly being polite in my replies. All I’m saying (it seems to me) is that it would be nice if there were a pool of shared indices we could allocate from, so we don’t have to care which binding points we get, and so that everyone isn’t just using 0 1 2 3 4. There are a lot of binding points available to these block interfaces, so it should be possible for every module to have its own “reserved parking”, so that no one has to push/pop these settings, if only because it’s a true pain point and extra code to maintain. Just having to stop your work to think about what to do about this dilemma is a travesty compared to any other part of OpenGL I’m aware of. (Edited: It doesn’t even have to be a “module”; it could be that one part of a team in one part of an engine isn’t talking to another part, etc.)