Have you ever taken a look at how the GL converts data from the format you pass in to the internal format?
I took a look at table 2.6 and it looks quite complicated. Granted, it only happens when updating textures or images, but I wonder if anyone has an idea of how it can be done.
What’s so hard about converting an image to another format?
The fact that it can be signed or unsigned, with 8, 16 or even 32 bits per channel… It may use a packed pixel format… Got 32 bits? That could be integer or floating-point data.
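To make that concrete, here is a minimal sketch of two of the per-channel conversions the spec describes: normalizing an unsigned 8-bit channel to [0, 1], and unpacking a packed 5-6-5 texel. The function names are my own, not GL API; a real driver handles many more format combinations than this.

```c
#include <stdint.h>

/* Normalize an unsigned 8-bit channel to [0, 1], as the GL does for
   UNSIGNED_BYTE data: c / (2^8 - 1).  (Hypothetical helper name.) */
static float unorm8_to_float(uint8_t c)
{
    return c / 255.0f;
}

/* Unpack a packed UNSIGNED_SHORT_5_6_5 texel into normalized RGB.
   Each field is masked out and divided by its own maximum value. */
static void unpack_565(uint16_t p, float rgb[3])
{
    rgb[0] = (float)((p >> 11) & 0x1F) / 31.0f; /* 5-bit red   */
    rgb[1] = (float)((p >>  5) & 0x3F) / 63.0f; /* 6-bit green */
    rgb[2] = (float)( p        & 0x1F) / 31.0f; /* 5-bit blue  */
}
```

Signed normalized, 16/32-bit, and float source data each get their own variant of the same idea, which is why the spec's conversion tables grow so large.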
Have you taken a look at the GL spec? I often take this stuff for granted, but I just had a glance at it and I must say it looks impressive.