How would that module coordinate anything with the main application? If the module wants some function to be called by the main application, how does it get that to happen?
It does so by convention. The main application is coded to call some particular function, and the module’s job is to provide that function. They both agree to follow that convention: the module provides a function of a certain name, and the main application calls the function with that name.
The same goes here. The main application establishes a convention: UBO binding point 2 is where the camera and projection matrices go. The module’s job is to put the UBO it wants to use into that slot. Or its job is to make sure that any shaders which want those matrices read them from UBO binding point 2.
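As a sketch of the shader’s half of that convention (the block and member names here are my own, not anything mandated), assuming GL 4.2+ so that `binding` can be set in GLSL:

```glsl
// These matrices always come from UBO binding point 2; every shader
// that wants them declares the same block at the same binding.
layout(std140, binding = 2) uniform CameraBlock {
    mat4 cameraMatrix;
    mat4 projectionMatrix;
};
```

The application’s half is just `glBindBufferBase(GL_UNIFORM_BUFFER, 2, cameraUbo)` with whatever buffer it wants those shaders to see.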
After all, in order for the main application and the module to coordinate, they both have to agree on the layout of that UBO, right?
If your concern is that the module might bash the state of binding points that the main application intends to use (i.e., it puts something else in UBO binding point 2)… that’s fine, so long as you structure your application appropriately.
Binding points are not supposed to be where something lives indefinitely. They are ephemeral; a binding is meant to be there only for a period of time. So you structure your application so that you don’t call a module unless you’re OK with resetting any binding points afterward.
Take imGUI for example. This is a module that has to share OpenGL with the main application. To render the GUI, it will have to use the various binding points. So your application knows that, if it wants to do some drawing after imGUI, it needs to consider all of that state to be dirty.
But, imGUI will only affect certain binding points. It will draw to whatever framebuffer is bound to the draw framebuffer binding point at the time the imGUI render call is made. So even imGUI follows a convention.
Of course, being a GUI, it’s pretty easy to render it all either before the scene or after it. Which means that rebinding the entire scene’s state afterward doesn’t redo any work.
At the end of the day, the ability to easily share and change resources used by multiple shaders and draw calls is more important (remember: if you want to change a commonly used resource, your way requires tracking down every shader and changing it). It also likely better matches the hardware: when you change programs (already a heavyweight state change), the system doesn’t have to change all of the resources, even if the two shaders share many of the same ones.
Don’t pay for what you don’t use.
And literally no graphics API works the way the two of you desire. Maybe there’s a reason for that.
Even with command buffer APIs like Vulkan or D3D12, where you can execute a command buffer in the middle of recording another command buffer, the state of bound descriptors after executing the other command buffer is either undefined (for Vulkan) or inherited from the CB you executed (D3D12). In neither case does the CB automatically reset the bound descriptors back to what they were before the primary CB.
Such APIs could have had a way to automatically reinstate descriptor resources. But they don’t (well, they do, but it’s called rebinding the descriptor set, and it’s still something you have to do explicitly).
What you want has costs associated with it, and not everyone wants to pay those costs.