Are unsigned byte indices supported in hardware?

Hi,

Does anyone know whether GL_UNSIGNED_BYTE indices are supported natively in hardware? Or are they converted to GL_UNSIGNED_SHORT by the driver, so they don’t actually result in any memory savings?

D3D 11 only supports unsigned shorts and unsigned ints; see IASetIndexBuffer. Perhaps D3D doesn’t support unsigned bytes because they are not supported in hardware, and GL supports them simply because it historically always has.
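
For reference, this is roughly what it looks like in code (my own sketch, not from the docs verbatim; the function name is made up and the context and buffer are assumed to exist already):

```cpp
#include <d3d11.h>

// Hedged sketch: bind the smallest index type D3D11 accepts.
// 'context' and 'indexBuffer' are assumed to have been created elsewhere.
void BindSmallestD3D11IndexBuffer(ID3D11DeviceContext* context,
                                  ID3D11Buffer* indexBuffer)
{
    // IASetIndexBuffer only accepts DXGI_FORMAT_R16_UINT or
    // DXGI_FORMAT_R32_UINT; there is no 8-bit index format in D3D11.
    context->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R16_UINT, 0);
}
```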

Regards,
Patrick

Hi Patrick,

I am not sure we can really speak about memory savings when comparing GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT. GL_UNSIGNED_BYTE implies at most 256 addressable vertices, so the meshes involved are tiny; even if we use GL_UNSIGNED_SHORT where GL_UNSIGNED_BYTE would have been enough, the extra bandwidth cost is quite insignificant.
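
To put a rough number on that (my own back-of-the-envelope illustration, with a made-up mesh size):

```cpp
#include <cstddef>

// With GL_UNSIGNED_BYTE you can address at most 256 vertices, so even a
// densely indexed little mesh has only a few thousand indices. The figure
// below (hypothetical) shows the entire ubyte-vs-ushort saving for one.
const std::size_t indexCount  = 1500;                                // hypothetical small mesh
const std::size_t ubyteBytes  = indexCount * sizeof(unsigned char);  // 1500 bytes
const std::size_t ushortBytes = indexCount * sizeof(unsigned short); // 3000 bytes on typical platforms
const std::size_t savedBytes  = ushortBytes - ubyteBytes;            // 1500 bytes saved, i.e. not much
```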

All in all, I guess it’s converted but I don’t really know.

I knew someone was going to say that ;).

I agree the savings are usually insignificant, but I am writing something (text, not code) and want to be as precise as possible.

It is good practice to always use the smallest datatype, but it could ultimately cost some overhead if the driver has to convert it, especially if the indices are dynamic.

Sounds like the safe bet is to go with unsigned shorts and not bother with bytes. Anyone agree/disagree?
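
Something like this is what I mean (a minimal sketch; it assumes a VAO with vertex attributes is already bound and the function name is mine):

```cpp
#include <glad/glad.h>  // or any other GL loader

// Minimal sketch of the "safe bet": upload 16-bit indices and draw with
// GL_UNSIGNED_SHORT rather than GL_UNSIGNED_BYTE.
void DrawWithUshortIndices(GLuint ibo, const unsigned short* indices, GLsizei count)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(unsigned short),
                 indices, GL_STATIC_DRAW);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, nullptr);
}
```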

Thanks,
Patrick

Sounds like the safe bet is to go with unsigned shorts and not bother with bytes

That’s what I would have said. :P

Also, I don’t really agree with “it is good practice to always use the smallest datatype”. Alignment is really important, so I would add “subject to the appropriate alignment”.
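
To illustrate what I mean (my own sketch): if you pack several small index ranges into one buffer, keep each range’s byte offset aligned even though the individual indices are only 1 or 2 bytes.

```cpp
#include <cstddef>

// Round a byte offset up to the next multiple of 'alignment'
// (alignment must be a power of two).
constexpr std::size_t AlignUp(std::size_t offset, std::size_t alignment)
{
    return (offset + alignment - 1) & ~(alignment - 1);
}

// Example: 1501 ushort indices occupy 3002 bytes, so the next range packed
// into the same buffer would start at AlignUp(3002, 4) == 3004.
```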

Across the range of AMD graphics cards, ushorts are the best option for both performance and memory savings.

How are ubytes handled on AMD graphics cards? Are they cast to ushort, or something else?

So it sounds like if someone is designing an engine, they probably shouldn’t even expose unsigned byte indices, since it is unlikely that there is any performance or memory benefit. Support can always be added to the engine later without breaking backwards compatibility.

I see this as different from not exposing something like double-precision vertex attributes, which are supported in hardware now. I doubt there will ever be motivation to add hardware support for unsigned byte indices.
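
Concretely, I’m picturing something like this in the engine API (hypothetical names, just to show the idea):

```cpp
// Hypothetical engine-facing index type: only 16- and 32-bit indices are
// exposed today. An 8-bit entry could be added later without breaking
// existing callers, since the enum only grows.
enum class IndexType
{
    UnsignedShort,  // GL_UNSIGNED_SHORT / DXGI_FORMAT_R16_UINT
    UnsignedInt     // GL_UNSIGNED_INT   / DXGI_FORMAT_R32_UINT
};
```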

Regards,
Patrick

I’ve found as a general rule of thumb that if it’s not supported by D3D then it’s not supported in hardware. This follows from D3D’s approach of only exposing functionality that’s in the hardware and not software-emulating anything. It’s not always the case, of course, but it applies much more often than not.

Of course, there may be some hardware that does support GL_UNSIGNED_BYTE indices (the iPhone is one possibility that springs to mind), but for general usage GL_UNSIGNED_SHORT should always be preferred.

A secondary bonus of preferring GL_UNSIGNED_SHORT (in this case versus 32-bit indices) is that it will keep you under the maximum hardware-supported index count most of the time. There is still hardware out there where this is limited, and OpenGL will fall back to software emulation if you go over it.
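
If you want to check those limits at runtime, something like this works (sketch only; error handling omitted):

```cpp
#include <glad/glad.h>  // or any other GL loader

// GL_MAX_ELEMENTS_VERTICES and GL_MAX_ELEMENTS_INDICES are the recommended
// limits for glDrawRangeElements; exceeding them can push the driver onto
// a slower (possibly software) path on older hardware.
void QueryRangeLimits(GLint* maxVertices, GLint* maxIndices)
{
    glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, maxVertices);
    glGetIntegerv(GL_MAX_ELEMENTS_INDICES, maxIndices);
}
```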