Here’s the really silly part: the AMD documentation clearly states that sRGB conversion… is part of the sampler state. OpenGL (without EXT_texture_sRGB_decode) makes this not merely part of the texture’s state, but part of the texture’s format, a change to which requires destroying the old storage and creating new storage (or aliasing it with a view).
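For what it’s worth, with the EXT_texture_sRGB_decode extension the decode really does become toggleable as sampler state. A rough sketch (assumes a context exposing the extension; error checking omitted):

```c
/* sRGB decode as sampler state, via EXT_texture_sRGB_decode. */
GLuint sampler;
glGenSamplers(1, &sampler);

/* Fetch the texture's sRGB texels raw, skipping the decode... */
glSamplerParameteri(sampler, GL_TEXTURE_SRGB_DECODE_EXT, GL_SKIP_DECODE_EXT);

/* ...or decode to linear, all without touching the texture's format. */
glSamplerParameteri(sampler, GL_TEXTURE_SRGB_DECODE_EXT, GL_DECODE_EXT);

glBindSampler(0, sampler);  /* attach to texture unit 0 */
```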
So they already have to check the texture object and modify what they upload based on it. Though admittedly, the sampler object could just store two versions of its state, sRGB and non-sRGB, and upload one or the other depending on the texture.
It’s not a big deal, really; a developer can always emulate this by wrapping the GL jazz in a layer of their own crud, but that seems a foolish thing to demand.
Or they could not do that. Generally speaking, there’s not a whole lot of need to use one texture object with lots of different sampling parameters. It is important to be able to do that, but the uses aren’t exactly legion. And even fewer are the times when you need to change swizzle masks multiple times for the same texture.
So there’s no reason to “emulate” this in any way. Just create a couple of views of the texture. One for depth, one for stencil. Or 2-3 for whatever different swizzle masks you need. You treat them as “different” textures. Outside of the depth/stencil thing, there just isn’t that much need to modify these parameters on the fly.
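For example, with ARB_texture_view (core in GL 4.3), the multiple-views approach looks roughly like this (a sketch; assumes `tex` is an existing immutable-storage GL_TEXTURE_2D with RGBA8 storage):

```c
/* Two views of the same storage, differing only in swizzle mask. */
GLuint views[2];
glGenTextures(2, views);

glTextureView(views[0], GL_TEXTURE_2D, tex, GL_RGBA8, 0, 1, 0, 1);
glTextureView(views[1], GL_TEXTURE_2D, tex, GL_RGBA8, 0, 1, 0, 1);

/* View 0: broadcast red to RGB (e.g. single-channel data). */
glBindTexture(GL_TEXTURE_2D, views[0]);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA,
                 (const GLint[]){GL_RED, GL_RED, GL_RED, GL_ONE});

/* View 1: identity swizzle. */
glBindTexture(GL_TEXTURE_2D, views[1]);
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA,
                 (const GLint[]){GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA});
```

From then on you just bind whichever view you need; neither bind touches the other’s parameters.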
We’re just talking about checking whether the current hardware texture state matches the sampler state, then sending those numbers through…
Well, consider how the driver makers want to implement it, for performance reasons. In D3D, these state objects are immutable, so there, they can just bake some data into memory and shove it at the card as needed. If you bind a texture or sampler, they know you intend to use it exactly as it is written, so they can send it.
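Concretely, the D3D11 side of that looks like this (a hedged sketch using the C-style COM API; assumes a valid `device`):

```c
/* D3D11 sampler state: the description is read once at creation,
   and the resulting object is immutable thereafter. */
D3D11_SAMPLER_DESC desc = {0};
desc.Filter   = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
desc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
desc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
desc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
desc.MaxLOD   = D3D11_FLOAT32_MAX;

ID3D11SamplerState *state = NULL;
device->lpVtbl->CreateSamplerState(device, &desc, &state);
/* The driver can bake `desc` into hardware form right here, once,
   because nothing can mutate `state` afterward. */
```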
In OpenGL, the objects are always mutable. So they can’t bake the data until you actually render with the object. But that’s fine; if you render with it and don’t ever change their state, they can get equivalent performance to D3D’s immutable state objects by baking the data at render time.
The problem is that, if texture and sampler objects don’t conform to the hardware, then they’ll have to modify the “baked” data every time you use them. Each pair of texture+sampler will need to reconfigure the register data. Every time. Dynamically, every frame. So you can never get equivalent performance to D3D’s immutable state objects.
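To make the cost concrete, here is a toy sketch of that caching problem (all structure layouts and names are invented for illustration; a real driver’s register format would differ): a “baked” block can be reused only if the mutable texture and sampler it was built from are unchanged, which is exactly the check D3D’s immutable objects never have to make.

```c
#include <string.h>

typedef struct { int format; int swizzle[4]; } Texture;
typedef struct { int min_filter; int wrap_mode; } Sampler;

/* The "baked" register block the hardware would consume,
   plus snapshots of the state it was baked from. */
typedef struct {
    int words[8];
    int valid;
    Texture tex_snapshot;
    Sampler smp_snapshot;
} Baked;

/* Bake combined texture+sampler state into one register block. */
static void bake(const Texture *t, const Sampler *s, Baked *out) {
    out->words[0] = t->format;
    memcpy(&out->words[1], t->swizzle, sizeof t->swizzle);
    out->words[5] = s->min_filter;
    out->words[6] = s->wrap_mode;
    out->tex_snapshot = *t;
    out->smp_snapshot = *s;
    out->valid = 1;
}

/* At draw time: rebake only if the mutable objects changed since
   last use. Returns 1 if a rebake was needed, 0 on a cache hit. */
static int use_pair(const Texture *t, const Sampler *s, Baked *cache) {
    if (cache->valid &&
        memcmp(&cache->tex_snapshot, t, sizeof *t) == 0 &&
        memcmp(&cache->smp_snapshot, s, sizeof *s) == 0)
        return 0;   /* unchanged since last bake: send cached words */
    bake(t, s, cache);
    return 1;       /* had to reconfigure the register data */
}
```

If every texture+sampler combination forced the rebake path on every use, the comparison above would never pay off, which is the performance gap being described.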