The spec can be extended to address that, and endianness shouldn’t be a problem, since all the hardware I know of supports bit swapping for both pixel reads and writes. The internal format shouldn’t matter, since it’s internal.
My point is that it breaks the abstraction and limits what IHVs can actually do with their hardware.
The whole point of pretending that a DEPTH_STENCIL texture is an RGBA8 texture is to make accessing it faster. If performance didn’t matter, you would simply copy the texel data yourself. Therefore, you would only get a performance gain if the hardware were capable of using one internal format as another.
Do you know if all hardware can do that for all formats? Are there format combinations it can’t handle in a performant manner?
No; it’s simply not worthwhile to bother with for something that should have been specified correctly (getting stencil data in a shader) to begin with.
Usually you want textures to be tiled for better cache utilization, but OpenGL specifies them in linear form. Since hardware supports both linear and tiled textures, a driver developer can use a “blit” operation (implemented with a textured quad and pass-through vertex and pixel shaders) to copy between linear and tiled layouts. This blit is part of every texture transfer in OpenGL, i.e. TexImage, GetTexImage, ReadPixels and perhaps even DrawPixels, and in my opinion it is the reason textures must be copied into PBOs (the copy effectively being the blit) rather than the buffer backing the texture being exposed directly.
The reason you can’t map a texture is that the format of that texture, the memory layout and arrangement of pixels, is implementation dependent. Is it tiled? Would it not be tiled for cubemaps or rectangle textures? What tiling scheme is used? Implementations are only free to choose these things if the user can’t directly access the data.
Buffer objects are unformatted memory; as such, the user can directly access and modify them. OpenGL guarantees that the bytes you set will still be set exactly as you set them, until you perform some operation that changes these bytes.
And I seriously doubt they’re doing a blit operation that involves actual shaders. A simple DMA that swizzles the texture in transit would work just fine and would not interfere with rendering in progress.