I understand that in general GL_ARGB & GL_UNSIGNED_BYTE is the fastest for the pixel routines. I wanted to find out if the same is true for texture mapping.
On a related note, how does the internal format affect the speed of texture mapping? Does it have to match the external format for optimum performance, or can I load an ARGB image and use an RGB internal representation?
I’m using a texture-mapped quad to display a bitmap and want it as fast as possible – are there any other tips for optimizing this?
Thanks to all.
If you’re on Windows (or any little-endian machine), chances are that GL_BGRA is the fastest format. GL_ARGB amounts to the same thing if you turn on GL_UNPACK_SWAP_BYTES, though.
As far as texturing goes, uploading is faster if it’s in a format the card is going to like, which is likely to be GL_BGRA (on little-endian machines, at least). However, once the texture is uploaded, it’s converted to whatever the optimal format is, so the rendering performance won’t change.
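If your image data happens to be in RGBA order in memory, one option is to swizzle it to BGRA once at load time so the driver doesn’t have to reorder bytes on every upload. A minimal sketch (the function name and in-place layout are my own, not from any GL header):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Swap the R and B channels in place so RGBA-ordered pixels become
 * BGRA-ordered, matching what GL_BGRA + GL_UNSIGNED_BYTE expects.
 * Done once at load time, so the driver no longer has to reorder
 * bytes on each glTexImage2D/glTexSubImage2D call. */
static void rgba_to_bgra(uint8_t *pixels, size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; ++i) {
        uint8_t r = pixels[i * 4 + 0];
        pixels[i * 4 + 0] = pixels[i * 4 + 2]; /* B into byte 0 */
        pixels[i * 4 + 2] = r;                 /* R into byte 2 */
        /* G (byte 1) and A (byte 3) stay where they are */
    }
}
```

After the swizzle you’d pass the buffer to glTexImage2D with format GL_BGRA and type GL_UNSIGNED_BYTE.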
Well, there is a difference between 32-bit, 16-bit and compressed textures, in that the smaller the texture is, the less texture fetch bandwidth you’re using, and thus the less fill rate it’ll use. You can set the internal format to GL_RGB(A)8, GL_RGB5(_A1) or GL_COMPRESSED_RGB(A)_S3TC_DXT(1/3/5) to specify what the card does internally.
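To put rough numbers on the bandwidth difference, here’s a sketch of the per-level storage for those internal formats (just the arithmetic, no GL calls; S3TC encodes 4×4 texel blocks at 8 bytes for DXT1 and 16 bytes for DXT3/5):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes needed to store one w x h mip level in each internal format. */
static size_t bytes_rgba8(size_t w, size_t h)   { return w * h * 4; } /* 32-bit */
static size_t bytes_rgb5_a1(size_t w, size_t h) { return w * h * 2; } /* 16-bit */

/* S3TC compresses 4x4 texel blocks: 8 bytes/block for DXT1,
 * 16 bytes/block for DXT3 and DXT5. Dimensions round up to
 * whole blocks. */
static size_t bytes_dxt(size_t w, size_t h, size_t block_bytes)
{
    size_t blocks_w = (w + 3) / 4;
    size_t blocks_h = (h + 3) / 4;
    return blocks_w * blocks_h * block_bytes;
}
```

For a 256×256 texture that works out to 256 KB as GL_RGBA8, 128 KB as GL_RGB5_A1, and 32 KB as DXT1 – an 8:1 cut in the data the card has to fetch.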
Thanks for the quick reply and information. I actually meant GL_BGRA, but typed GL_ARGB (it’s been a long day).
So if I just specify GL_RGB for the internal format, the driver should handle the rest and pick the best format to use?
Any other tips for making a speedy image bitmap texture quad?