Using color-indexed textures

Does using a color-indexed texture (1 byte per texel, instead of an RGB texture at 3 bytes per texel) save video card memory? Or does OpenGL translate the texture to RGB internally, so there's no difference?

I couldn’t get a color-indexed texture to work properly (an RGB texture works fine). Here’s the code:


    unsigned char tex_ind[4] = { 0, 1, 2, 3 };

    float mr[4]= { 0.5, 0.3, 0.0, 1.0 };
    float mg[4]= { 1.0, 1.0, 0.0, 1.0 };
    float mb[4]= { 1.0, 0.0, 1.0, 1.0 };

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glPixelMapfv (GL_PIXEL_MAP_I_TO_R, 4, mr);
    glPixelMapfv (GL_PIXEL_MAP_I_TO_B, 4, mb);
    glPixelMapfv (GL_PIXEL_MAP_I_TO_G, 4, mg);

    glTexImage2D(GL_TEXTURE_2D, 0, 1,
             2, 2, 0,
             GL_COLOR_INDEX, GL_UNSIGNED_BYTE, tex_ind);



    glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
    glClear (GL_COLOR_BUFFER_BIT);

    glLoadIdentity ();
    glColor3f (1.0, 1.0, 1.0);
    glBegin (GL_QUADS);
    glTexCoord2f (0.0, 0.0); glVertex2f (-1.0, -1.0);
    glTexCoord2f (1.0, 0.0); glVertex2f (1.0, -1.0);
    glTexCoord2f (1.0, 1.0); glVertex2f (1.0, 1.0);
    glTexCoord2f (0.0, 1.0); glVertex2f (-1.0, 1.0);
    glEnd ();

This code produces a gray-scale image using values from the I_TO_R map only (I_TO_G and I_TO_B do not influence the final image in any way), treating red as white.

What am I doing wrong?


I don’t know what you may be doing wrong, but color index textures are not supported by modern consumer cards.

I thought that too… You are on the safer side using RGBA; cards have plenty of memory anyway. You can also consider compressed textures if you are that worried about memory.

Look at your glTexImage2D. You’re asking for internal format “1”, which means GL_LUMINANCE, i.e. greyscale. The pixel map is probably working correctly.

Stop using numbers for internal formats. Use the named constants (GL_RGB8, GL_RGBA8, GL_LUMINANCE8, …) instead.

If you want to use indexed internal formats, you need the EXT_paletted_texture extension. But modern video cards don’t support it.

If you’ve got a lot of indexed-color data, one way to store it efficiently in VRAM is to upload it as an 8-bit single-channel format (LUMINANCE8 is OK), then use a fragment program or shader to sample that texture and use the sampled value as an index into a separate RGBA palette texture.

This means you have to write a fragment program, so it won’t work on cards that support neither EXT_paletted_texture nor fragment programs. There are some, like the ATI Rage 128 and the ATI Radeon 7500.

sqrt[-1], Zengar,
My texture is HUGE, and I’m trying to optimize. Also, I may have to run on some older video cards. How long ago did compressed textures become supported? What happens if a video card doesn’t support compressed textures?

“1” is the number of components, as far as I can see from the Blue Book. In this case, I suppose, it’s one component (the color index).

    void glTexImage2D(GLenum target, GLint level, GLint components,
                      GLsizei width, GLsizei height, GLint border,
                      GLenum format, GLenum type, const GLvoid *pixels);

    glTexImage2D(GL_TEXTURE_2D, 0, 1, 2, 2, 0, GL_COLOR_INDEX, GL_UNSIGNED_BYTE, tex_ind);

Hmm, my last post looks odd. Am I cursed to break everything? %)

Your reference is out of date. <components> is circa OpenGL 1.0; OpenGL 1.1 and later specify the third argument as <internal format>. Check Google.

I followed the link at the site (Documentation -> OpenGL Reference Manual). I can see now that it’s for v1.0 :(

Thanks for help!

With a value of “3” it works, but you were right. The texture is drawn, but performance is reduced significantly.