This is taken from NVIDIA's OpenGL performance FAQ:
- How can I maximize texture downloading performance?
Best RGB/RGBA texture image formats/types in order of performance:
Image Format   Image Type                      Texture Internal Format
GL_RGB         GL_UNSIGNED_SHORT_5_6_5         GL_RGB
GL_BGRA        GL_UNSIGNED_SHORT_1_5_5_5_REV   GL_RGBA
GL_BGRA        GL_UNSIGNED_SHORT_4_4_4_4_REV   GL_RGBA
GL_BGRA        GL_UNSIGNED_INT_8_8_8_8_REV     GL_RGBA
GL_RGBA        GL_UNSIGNED_INT_8_8_8_8         GL_RGBA
(hope this shows right)
Bear in mind that the NVIDIA GPUs store all 24-bit texels in 32-bit entries, so try using the
spare alpha channel for something worthwhile, or it will just be wasted space. Moreover, 32-bit
texels can be downloaded at twice the speed of 24-bit texels. Single or dual component texture
formats such as GL_LUMINANCE, GL_ALPHA and GL_LUMINANCE_ALPHA are also very
effective, as well as space efficient, particularly when they are blended with a constant color (e.g.
grass, sky, etc.). Most importantly, always use glTexSubImage2D instead of glTexImage2D
(and glCopyTexSubImage2D instead of glCopyTexImage2D) when updating texture images.
The former call avoids any memory freeing or allocation, while the latter call may be required to
reallocate its texture buffer for the newly defined texture.
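The "allocate once, then only update" advice from the FAQ could look roughly like this (a sketch only, assuming a current GL context; the function and parameter names are mine, not from the FAQ):

```c
#include <GL/gl.h>

/* Allocate the texture storage once, at load time. After this the
   driver owns a buffer of the right size. */
void upload_texture(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
}

/* Whenever the image changes, rewrite the texels in place with
   glTexSubImage2D; this can reuse the existing buffer instead of
   freeing and reallocating it as glTexImage2D may do. */
void update_texture(GLuint tex, GLsizei w, GLsizei h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixels);
}
```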
As you can see, it's best to use GL_RGB for 16-bit images, but for 24- or 32-bit images you're better off with GL_BGRA_EXT and GL_UNSIGNED_INT_8_8_8_8_REV.
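For the 24-bit case that means expanding each RGB triple into a BGRA quad before uploading. A minimal sketch in plain C (the function name is mine; byte order here assumes a little-endian host, where GL_UNSIGNED_INT_8_8_8_8_REV with GL_BGRA reads bytes as B, G, R, A in memory):

```c
#include <stddef.h>
#include <stdint.h>

/* Expand tightly packed 24-bit RGB pixels into 32-bit BGRA pixels,
   suitable for a GL_BGRA / GL_UNSIGNED_INT_8_8_8_8_REV upload.
   The otherwise-wasted alpha channel is forced to 255 (opaque). */
static void rgb24_to_bgra32(const uint8_t *rgb, uint8_t *bgra, size_t pixels)
{
    for (size_t i = 0; i < pixels; ++i) {
        bgra[4 * i + 0] = rgb[3 * i + 2]; /* B */
        bgra[4 * i + 1] = rgb[3 * i + 1]; /* G */
        bgra[4 * i + 2] = rgb[3 * i + 0]; /* R */
        bgra[4 * i + 3] = 255;            /* A: pads the texel to 32 bits */
    }
}
```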
When I load my textures, I convert 16-bit RGB images to GL_RGB/GL_UNSIGNED_SHORT_5_6_5, 16-bit RGBA to GL_BGRA/GL_UNSIGNED_SHORT_1_5_5_5_REV, and 24-bit or 32-bit RGBA to GL_BGRA/GL_UNSIGNED_INT_8_8_8_8_REV.
So for 16-bit RGB, I pack each pixel into a single short (RRRRRGGGGGGBBBBB), and so on.
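That 5_6_5 packing could be sketched like this (plain C; the function name is mine):

```c
#include <stdint.h>

/* Pack 8-bit R, G, B components into one GL_UNSIGNED_SHORT_5_6_5
   texel: RRRRRGGGGGGBBBBB, red in the high bits. The low bits of
   each component are simply dropped. */
static uint16_t pack_rgb565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```

For example, pure white (255, 255, 255) packs to 0xFFFF and pure red (255, 0, 0) to 0xF800.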
I have not tested whether this actually performs better, but I trust NVIDIA on it.
Is this a good idea, or shouldn't I bother?
[This message has been edited by Sven Clauw (edited 02-28-2001).]