inconsistent or missing internal formats

I’m trying to flatten the three format arguments of glTexImage (internal format, format, and type) down to a single identifier, like what D3D10 does with DXGI_FORMAT. I’ve basically taken all the DXGI_FORMAT enums, given them GL_ prefixes, and shoved them into an array of structs along with the compatible sized internal format, format, and type arguments. What I don’t understand is how to get fixed-point behavior with the sized internal formats in GL 3.x.
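Here’s a minimal sketch of the table I mean (the GL_FORMAT_* enum and the struct/array names are my own invention, not anything from GL; assumes a GL header or loader is included):

typedef enum {
    GL_FORMAT_R16G16B16A16_FLOAT,
    GL_FORMAT_R16G16B16A16_SINT,
    GL_FORMAT_R16G16B16A16_UINT,
    GL_FORMAT_R16G16B16A16_UNORM,
    /* ... one value per DXGI_FORMAT ... */
} MyFormat;

typedef struct {
    MyFormat id;              /* my flattened identifier              */
    GLint    internalFormat;  /* sized internal format for glTexImage */
    GLenum   format;          /* client pixel format                  */
    GLenum   type;            /* client pixel type                    */
} FormatEntry;

static const FormatEntry formatTable[] = {
    { GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT },
    /* ... */
};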

For half/float, things are consistent:
GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT

For signed ints it is consistent:
GL_FORMAT_R16G16B16A16_SINT, GL_RGBA16I, GL_RGBA_INTEGER, GL_SHORT

For unsigned ints, am I allowed to continue using the _INTEGER format?
GL_FORMAT_R16G16B16A16_UINT, GL_RGBA16UI, GL_RGBA_INTEGER, GL_UNSIGNED_SHORT

Now the really confusing part is fixed-point, as Table 3.12 in the 3.1 spec (page 122) is apparently missing a bunch of entries. Right now I’m trying something like this:
GL_FORMAT_R16G16B16A16_UNORM, GL_RGBA16UI, GL_RGBA, GL_UNSIGNED_SHORT
The sized internal format is an unsigned-integer format, but the format argument is the plain (non-integer) GL_RGBA. Is this conversion legal? The specification indicates that if the internal format is integer, then the format must be integer as well, but I may be misunderstanding.

The second paragraph of section 3.8 says “The internal data type of a texture may be SIGNED OR UNSIGNED NORMALIZED FIXED-POINT, signed or unsigned integer, or floating-point, depending on the internal format of the texture.”

The caption of table 3.12 on page 123 says “The component resolution prefix indicates the internal data type: f is floating point, i is signed integer, ui is unsigned integer, s is signed normalized FIXED-POINT, and no prefix is unsigned normalized FIXED-POINT.” In that table, there are NO entries with an s prefix, and the only ones with no letter prefix are special packed types like 5551. Plain 8888, which we’ve used forever, is not listed in table 3.12 as having either a signed or an unsigned normalized fixed-point internal format.

Page 120 enumerates some _SNORM sized internal formats, and they are defined in enum.spec for 3.1, but there is no corresponding set of _UNORM enums for unsigned fixed-point, which should be the most obvious, baseline usage of texture mapping…

I guess my question is, how do I define a non-ambiguous, complete mapping of all legal glTexImage internal format, format, and type args into a single set of identifiers like DXGI_FORMAT does?

There are a couple of misunderstandings here.

First: OpenGL will perform conversions from any format/type to the internalformat, with only a couple of exceptions (spelled out by, e.g., EXT_texture_integer, ARB_depth_texture, etc.)

So there is no single mapping of client format/type to something like GL_FORMAT_R16G16B16A16_FLOAT. These are all valid mappings:

GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_RGBA, GL_HALF_FLOAT
GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_RGBA, GL_FLOAT (client’s float data will be converted to half float during transfer)
GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_LUMINANCE, GL_HALF_FLOAT (client’s single channel will be expanded to RGBA during transfer)
GL_FORMAT_R16G16B16A16_FLOAT, GL_RGBA16F, GL_RGB, GL_UNSIGNED_BYTE (client’s data will be expanded and converted)
(many, many more…)
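For example, all of these calls would produce a GL_RGBA16F texture, with GL doing the conversion during the transfer (a sketch; w, h, and the pixel pointers are placeholders):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_HALF_FLOAT, halfPixels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, floatPixels);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, bytePixels);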

So you can create a single, unambiguous mapping from GL’s internalformat to an enum of your choosing. But the transfer from client data to GL’s internalformat, by definition, allows a combinatorial explosion of conversions.

Second: UNORM is unsigned normalized data. This is data which is stored as a fixed-point integer (for example, a byte in 0.8 fixed-point format) but is treated during processing as a float value (i.e. you get values in [0…1] when you sample it in a shader). This is distinctly different from an “int” type, which is stored as an integer and is sampled as an integer (on SM4 hardware).
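In code terms, sampling an 8-bit unorm channel amounts to this (a one-line illustration of the rule, not actual GL code):

/* an 8-bit unorm value b is returned by the sampler as b / 255.0 */
float unorm8_to_float(unsigned char b) { return b / 255.0f; }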

So:
GL_FORMAT_R16G16B16A16_UNORM, GL_RGBA16UI, GL_RGBA, GL_UNSIGNED_SHORT
isn’t a valid conversion per the EXT_texture_integer spec. Client and internal integer-ness must match (but note that expansions and conversions are still allowed, as long as integer-ness is consistent.)
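So for 16-bit unsigned integer texel data, the client format has to be the _INTEGER variant. A sketch (w, h, and pixels are placeholders):

/* valid: integer internalformat, integer client format */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16UI, w, h, 0, GL_RGBA_INTEGER, GL_UNSIGNED_SHORT, pixels);

/* invalid: integer internalformat, non-integer client format */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16UI, w, h, 0, GL_RGBA, GL_UNSIGNED_SHORT, pixels);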

The “UNORM” types are the original, unsuffixed types in GL. So if you’re trying to match R16G16B16A16_UNORM, it would be:
GL_FORMAT_R16G16B16A16_UNORM, GL_RGBA16, GL_RGBA, GL_UNSIGNED_SHORT
(and by the way, there’s no guarantee that the requested internalformat will actually be used.)
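If you want to see what you actually got, you can query the level after the upload (a sketch; assumes the texture is bound to GL_TEXTURE_2D):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, w, h, 0, GL_RGBA, GL_UNSIGNED_SHORT, pixels);
GLint actual;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &actual);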