The layout of uniforms is fixed at program creation, so there are no string operations required at bind.
Now I’m really confused…
As I understand it, in hardware, uniforms are just a flat “array” of registers, numbered 0 through N-1. When you link a program, it assigns each uniform variable name to one or more uniform registers. So, the mat4 declared with the name “localToWorld” gets, say, uniforms 0-3.
However, a second program may declare a mat4 uniform with the same name, but because its other declarations come in a different order, it gets uniforms 6-9.
When you build your custom uniform block, you say that it has a mat4 named “localToWorld”. If you bind this uniform block to both programs (as it is a shared uniform), it can’t have the “localToWorld” matrix in both hardware uniforms 0-3 and 6-9. So it seems like one of two things needs to happen.
One: at bind time, you determine, per program, where each uniform defined in the uniform block gets assigned. That means a lot of searching; you have to find the mat4 named “localToWorld”. This isn’t onerous, but it isn’t free either.
Two: at bind time, you patch the program itself, rewriting its layout to match the uniform block. So you walk into the program and move everything from registers 0-3 to 6-9, or wherever the uniform block says things are laid out. But if programs are stored in GPU memory, that can’t be a quick operation.
So, what exactly am I missing here that keeps object binding from being slower than it needs to be?
Binding will fail if the uniform block(s) is/are not compatible with the program.
How is compatibility defined?
BTW, as a matter of interest, how do you deal with uniforms that are structs? Since the program defines what the structs are, don’t you need the linked program to build that uniform block?
Additionally, it might be a good idea if, instead of creating a default uniform object directly from a linked program, the program created a mutable attribute object from which you then create the default uniform object. That way, you could edit the attribute object (removing shared uniforms, for example, assuming things in an attribute object can be removed) before creating the per-instance uniform block.
One last thing: format objects.
This is something I didn’t notice on my first reading, but you were talking about objects for things like image formats, right? GL_RGB8, etc.? Presumably this exists so that you can ask for an available image format that corresponds to some set of parameters, rather than just saying, “Give me an RGB image of some kind.”
OK, one thing after the last: display list objects.
I’m thinking that, with the concept of geometry-only display lists as well as vertex array objects, what you really want is just a “derived” class of vertex array object: an object that is totally compatible with VAOs, but with a different method of creation (rather than from buffers and so forth). That sounds like a really good idea.
This sounds like extension territory, though; it’s really complicated and is something that probably shouldn’t hold up the new object model.
[ edit Because I keep coming up with stuff based on the new object model ]
Something just occurred to me. Because all images are alike, is it possible/reasonable to take a “renderbuffer” (an image created from a format that, I guess, suggests being a render target as its primary function?) and bind it as a texture to a sampler? Will there be combinations of these bindings that don’t work, like binding a depth sampler to a non-depth texture, or just the wrong format to an image?