The order of their storage is given also.
No it isn’t. The implementation is free to store the binary data in whatever component ordering it wants. If it wants to store the bytes with green first, that’s a legitimate implementation. When you fetch it in the shader, the red will be the first component automatically (barring any texture swizzling, of course).
The definition holes that might still exist, such as the byte order used internally, could be closed by one or two declaratory sentences in the spec file.
And by adding those “one or two declaratory sentences”, you’re basically saying that if someone’s hardware works a different way, they cannot implement OpenGL. That’s a horrible idea; OpenGL should not enforce something like this when it doesn’t have to.
More importantly, my main point is that describing the storage of an individual pixel isn’t enough. There’s more to texture storage than an individual pixel. Most textures are stored swizzled, where pixels are arranged such that locality is maximized. For example, if you have GL_RGBA8, that’s 4 bytes per pixel. Let’s say that a cache line is 64 bytes in size. So a single cache line fetch will read 16 pixels.
If you stored the data linearly, each cache line would cover 16 horizontal pixels. However, as we know, textures are almost never accessed horizontally. A bilinear fetch from a fragment shader needs a 2x2 block of pixels. To get that from a linearly stored texture, you’d need to fetch two cache lines. However, if every cache line stored a 4x4 block of pixels, rather than a 16x1 linear array, then you would only need one cache line for a bilinear fetch. Oh sure, some will need two or four, but if you’re covering the whole face of a primitive, the number of times you’ll need more than one is greatly diminished. Also, you’ll sometimes need 4 cache line fetches in the 16x1 case too. Indeed, since you’re typically fetching a whole pixel-quad of texture samples (since fragment shaders run in 2x2 groups), you really need to read a 4x4 block of pixels.
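To make the cache-line arithmetic concrete, here’s a small sketch that just counts distinct cache lines for a 2x2 bilinear footprint under the two layouts described above. The texture width and the specific tiling are hypothetical; only the 4-bytes-per-texel / 64-byte-cache-line numbers come from the example in the text.

```python
# Illustration only: count how many 64-byte cache lines a 2x2 bilinear
# footprint touches under a row-major linear layout vs. a 4x4-tiled layout.
# RGBA8 = 4 bytes per texel, so 16 texels fill one 64-byte cache line.

TEX_W = 64           # texture width in texels (hypothetical)
BYTES_PER_TEXEL = 4  # GL_RGBA8
CACHE_LINE = 64      # bytes

def linear_line(x, y):
    """Cache line index of texel (x, y) in a row-major linear layout."""
    return (y * TEX_W + x) * BYTES_PER_TEXEL // CACHE_LINE

def tiled_line(x, y):
    """Cache line index when each 4x4 block of texels fills one cache line."""
    tiles_per_row = TEX_W // 4
    return (y // 4) * tiles_per_row + (x // 4)

def lines_touched(layout, x, y):
    """Distinct cache lines needed for the 2x2 block whose corner is (x, y)."""
    return len({layout(x + dx, y + dy) for dy in (0, 1) for dx in (0, 1)})

# A 2x2 fetch that stays inside one 4x4 tile:
print(lines_touched(linear_line, 5, 5))  # 2: the two rows hit two lines
print(lines_touched(tiled_line, 5, 5))   # 1: the whole block is in one tile

# Worst case for the tiled layout, straddling four tiles:
print(lines_touched(tiled_line, 3, 3))   # 4
```

This matches the claim in the text: the tiled layout usually needs one line where the linear layout needs two, and only occasionally (at tile boundaries) needs two or four.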
This is called “swizzling” of the texture’s storage. Rather than storing texel data linearly, it’s stored in these groups. Some swizzling is scan-like within the 4x4 block. Other swizzling will have sub-swizzles (each 2x2 block in the 4x4 is itself swizzled, and the 4 2x2 blocks in the 4x4 are swizzled). Different hardware has different standards, but virtually every piece of graphics hardware does swizzling.
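The “sub-swizzle” variant can be written down explicitly. This is only an illustrative layout, not any particular vendor’s: each 2x2 sub-block of a 4x4 tile is stored contiguously, and the four sub-blocks follow one another (which works out to Morton/Z-order for a 4x4 block).

```python
# Illustration only: storage index (0-15) of texel (x, y) inside a 4x4
# tile when each 2x2 sub-block is stored contiguously and the four
# sub-blocks are themselves ordered 2x2 (Morton/Z-order for a 4x4 tile).

def swizzled_index(x, y):
    sub_block = (y // 2) * 2 + (x // 2)  # which 2x2 sub-block (0..3)
    within = (y % 2) * 2 + (x % 2)       # position inside that sub-block
    return sub_block * 4 + within

for y in range(4):
    print([swizzled_index(x, y) for x in range(4)])
# [0, 1, 4, 5]
# [2, 3, 6, 7]
# [8, 9, 12, 13]
# [10, 11, 14, 15]
```

Note how any 2x2 sub-block maps to four consecutive storage slots, which is exactly the locality property the swizzling is after.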
A proper abstraction of textures, which OpenGL provides, allows different hardware to vary on these issues. Different hardware can swizzle, or not, as it sees fit. And because the internal layout of pixels in the hardware is not exposed by the API, OpenGL is able to support any hardware via a simple black-box model. All the driver needs to do is swizzle the data the user provides from glTex(Sub)Image, and unswizzle it via glGetTexSubImage/glReadPixels.
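The black-box model amounts to a reorder on upload and its inverse on readback. A minimal sketch, using a made-up 4x4 tiling (real drivers do this in hardware-specific ways, not in Python):

```python
# Sketch of the driver's job in the black-box model: reorder linear user
# data into tiles on upload (glTexSubImage), invert it on readback
# (glGetTexSubImage/glReadPixels). The 4x4 tiling here is made up.

TILE = 4

def swizzle(linear, w, h):
    """Row-major texels -> 4x4 tiles, each tile stored contiguously."""
    out = []
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            for y in range(ty, ty + TILE):
                for x in range(tx, tx + TILE):
                    out.append(linear[y * w + x])
    return out

def unswizzle(tiled, w, h):
    """Inverse reordering, back to row-major."""
    out = [None] * (w * h)
    i = 0
    for ty in range(0, h, TILE):
        for tx in range(0, w, TILE):
            for y in range(ty, ty + TILE):
                for x in range(tx, tx + TILE):
                    out[y * w + x] = tiled[i]
                    i += 1
    return out

texels = list(range(8 * 8))  # a linear 8x8 "image"
assert unswizzle(swizzle(texels, 8, 8), 8, 8) == texels
```

Because the user only ever sees the linear side of this round trip, the internal layout never leaks through the API.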
That’s why the Intel map texture extension requires an explicit flag, set before storage creation, to say that the texture won’t be stored swizzled. And you can’t map the texture unless you force it to be linear. So if you want to use textures as buffer objects, you too would need some way to tell the implementation not to swizzle the image.
If you were unaware of all this, perhaps you should spend some time learning how things currently work before suggesting how they ought to work.
Don’t you yourself notice that questions like the one brought up at the end of your last post are simply ridiculous?
If I had reason to think the question was ridiculous, I wouldn’t have asked it. You brought up each of those points, completely unbidden by anyone else, mind you. So it’s not clear what exactly you’re talking about at any particular point.
Or more to the point, you went off-topic when you brought up “It would be nice to be able to bind the pixel-data of textures directly to some buffer”. I was just following your digression.