indexed color textures

How do I use indexed color textures? What do I send to glTexImage2D(), and how do I give it the color table?

Hi !

If you run OpenGL in color-index mode (huhhh!) then you have already set up a color table and you have to use that.

If you run OpenGL in RGBA (non-color-index) mode and your texture is color-indexed, you have to convert the image to a non-indexed texture when you load it.

Mikael

Seems like having to convert on the fly for every texture would be a significant bottleneck, but then lots of games use 256-color textures with OpenGL, so I guess I have no idea what I’m talking about. shrugs

You just do all the conversion at load time; once you have created the texture, there is no more conversion that needs to be done.
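
Something along these lines, as a rough sketch (the image loader is assumed to hand you the raw indices and a 256-entry palette; the names here are just for illustration):

#include <stdlib.h>
#include <GL/gl.h>

/* Sketch: expand an 8-bit indexed image to RGBA once at load time,
   then upload it as a perfectly ordinary RGBA texture. */
void upload_indexed_as_rgba(const unsigned char *indices,
                            const unsigned char palette[256][4],
                            int width, int height)
{
    unsigned char *rgba = malloc(width * height * 4);
    int i;

    for (i = 0; i < width * height; ++i)
    {
        rgba[i*4+0] = palette[indices[i]][0];   /* R */
        rgba[i*4+1] = palette[indices[i]][1];   /* G */
        rgba[i*4+2] = palette[indices[i]][2];   /* B */
        rgba[i*4+3] = palette[indices[i]][3];   /* A */
    }

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
    free(rgba);
}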

Okay, so there’s no way of sending glTexImage2D() the value GL_COLOR_INDEX for the format parameter and having it work? Because the Red Book seems to suggest otherwise here:

Recall that when you specify your texture map with glTexImage*d(), the third argument is the number of R, G, B, A components to be selected for each texel. A single selected component is interpreted as a luminance value (L); if there are two, the first is luminance, and the second is an alpha value (A). Three components form an RGB color triple (C), and four components provide an RGB triple and a value for alpha. Note that this selection is performed after the pixel-transfer function has been applied. Therefore, it makes sense, for example, to specify a texture with a GL_COLOR_INDEX image because the indices are converted to RGBA values by table lookup before they’re used to form the texture image.
For the complete text: http://fly.cc.fer.hr/~unreal/theredbook/chapter09.html

It only suggests, though; it doesn’t really say anything clear or useful. Some time before that the book says texture mapping only works in RGBA mode, too, so it’s not talking about running OpenGL in indexed color mode. I’m confused.

cough Sorry.

What the Red Book says in that paragraph is that the texture image is converted from indexed to non-indexed during texture image specification, i.e. it will be stored as a regular, non-indexed texture in video memory.

The table is specified via glColorTable. You can look that one up in the spec. It might look like this:

struct Thingy
{
   GLubyte r,g,b,a;
};

Thingy table[256];
//fill table entries
<...>

glColorTable(GL_COLOR_TABLE,GL_RGBA8,256,GL_RGBA,GL_UNSIGNED_BYTE,table);
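
(On plain GL 1.1 without the imaging subset, the index-to-RGBA lookup the Red Book paragraph describes goes through the GL_PIXEL_MAP_I_TO_* maps instead. A rough, untested sketch; red/green/blue/alpha here are 256-entry float arrays in [0,1] that you would fill from your palette:)

glPixelMapfv(GL_PIXEL_MAP_I_TO_R, 256, red);
glPixelMapfv(GL_PIXEL_MAP_I_TO_G, 256, green);
glPixelMapfv(GL_PIXEL_MAP_I_TO_B, 256, blue);
glPixelMapfv(GL_PIXEL_MAP_I_TO_A, 256, alpha);

/* the indices are converted through the maps during specification;
   the texture ends up stored as ordinary RGBA */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_COLOR_INDEX, GL_UNSIGNED_BYTE, indexed_texture_data);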

But there’s a better way for this kind of thing. Certain hardware supports the EXT_paletted_texture and EXT_shared_texture_palette extensions.

This will keep the texture in indexed format in video memory. Example usage:

glEnable(GL_SHARED_TEXTURE_PALETTE_EXT);          /* one palette shared by all paletted textures */
glColorTableEXT(GL_SHARED_TEXTURE_PALETTE_EXT,    /* upload the 256-entry palette */
    GL_RGBA8,256,GL_RGBA,GL_UNSIGNED_BYTE,table);
glTexImage2D(GL_TEXTURE_2D,level,
    GL_COLOR_INDEX8_EXT,                          /* texture stays as 8-bit indices in video memory */
    width,height,0,
    GL_COLOR_INDEX,GL_UNSIGNED_BYTE,
    indexed_texture_data);

The paletted texture extension would be nice, except that, as far as I know, it is not supported on any ATI cards so far. Most of the newer NVidia cards support it.
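
So it’s worth checking for the extensions at runtime before relying on them. A minimal sketch, using the usual substring test against the extension string:

#include <string.h>
#include <GL/gl.h>

int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

/* ... */
if (has_extension("GL_EXT_paletted_texture") &&
    has_extension("GL_EXT_shared_texture_palette"))
{
    /* safe to use glColorTableEXT and GL_COLOR_INDEX8_EXT textures */
}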

Damn, I use an ATI (and I’m certainly not the only one).

The reason I want to use indexed color is so I can use palette changes to reflect changes in armor and status and such for PC sprites. I assume since the texture in video memory is converted to RGBA and no longer subject to any palette, I’ll have to reconstruct the texture (call glTexImage2D) every time I change the palette. Correct?

Correct.

But if you have a small number of shades, just precompute them all, and only bind the correct one when needed.
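
Something like this, roughly (expand_to_rgba() stands in for whatever load-time conversion you already do, and the names and array sizes are just examples):

#define NUM_SHADES 4

GLuint shade_tex[NUM_SHADES];
int s;

glGenTextures(NUM_SHADES, shade_tex);
for (s = 0; s < NUM_SHADES; ++s)
{
    glBindTexture(GL_TEXTURE_2D, shade_tex[s]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* same sprite indices, different palette per shade */
    expand_to_rgba(indices, palettes[s], width, height, rgba);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba);
}

/* at draw time, just pick the right one */
glBindTexture(GL_TEXTURE_2D, shade_tex[current_shade]);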

If your hardware can use the results from one texture as coordinates for another texture, you can use that to fake it.
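
(For what it’s worth, once you have real fragment shaders that dependent-lookup trick becomes trivial; a later-era GLSL sketch of the idea, shown here as a C string, not something the hardware discussed above necessarily supports. The sprite is stored as a single-channel index texture, the palette as a 256x1 RGBA texture with GL_NEAREST filtering, and the value fetched from the first is used as the coordinate into the second.)

/* GLSL fragment shader source doing the palette lookup */
static const char *palette_fs =
    "uniform sampler2D index_tex;   \n"  /* 8-bit indices, e.g. GL_LUMINANCE8 */
    "uniform sampler2D palette_tex; \n"  /* 256x1 RGBA palette, GL_NEAREST */
    "void main()                    \n"
    "{                              \n"
    "    float idx = texture2D(index_tex, gl_TexCoord[0].xy).r;  \n"
    "    gl_FragColor = texture2D(palette_tex, vec2(idx, 0.5));  \n"
    "}                              \n";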

New question.

Originally posted by zeckensack:

glColorTable(GL_COLOR_TABLE,GL_RGBA8,256,GL_RGBA,GL_UNSIGNED_BYTE,table);

What would be the difference between sending GL_RGBA8 and GL_RGBA for the second parameter, internalformat, if any?

And now there’s another problem, in that VC++ can’t seem to find glColorTable. It is prominently displayed in the specification, however. I suspect I need a newer OpenGL library or some such, though I’ll be damned if I know how to get it. Or I’m just forgetting to #include something, but that would just be too easy.

Fixed it. Found glext.h and all that. Then I had to #define GL_GLEXT_PROTOTYPES myself to get glext.h to do its job. Odd.
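
I.e. (in case anyone else hits this), glext.h only declares the actual function prototypes if that macro is defined before the include:

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>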

Originally posted by Radish:
What would be the difference between sending GL_RGBA8 and GL_RGBA for the second parameter, internalformat, if any?
You’re right, it wouldn’t make much of a difference. This was just a copy&paste artifact :wink:

Internalformat for glColorTable behaves in exactly the same way as it does for glTexImage2D. You can ask for a specific storage resolution, if you don’t want to have the “default” resolution, or if you’re unsure what that default is.

It’s just a hint, though. You may not get exactly what you want, most notably if the hardware doesn’t support the format natively. E.g. requesting RGBA16 on a GeForce2 MX will be ignored: the hardware just can’t do it, so the driver silently drops down to a supported format.
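
If you want to know what you actually got, you can query the texture after specifying it. A rough sketch (width/height/data being whatever you are uploading):

GLint red_bits, internal;

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, width, height, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, data);

/* what did the driver really allocate? */
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal);
/* on hardware without 16-bit channels, red_bits comes back as 8 or less */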

If your hardware will do fragment shaders, even if it’s only ATI_text_fragment_shader, and you have enough texture units, you might be able to achieve a paletted-texture effect with just two textures and a parameter…

Okay, never mind, problem not solved. Even with glext.h and everything compiling fine I get “unresolved external symbol _glColorTable@24” when it tries to build. Goddammit!

Are you compiling under Windows?

By the way, new entry points defined in glext.h should be initialized at runtime; see:
(above OpenGL 1.1 functionality, extensions)
http://opengl.org/resources/faq/getting_started.html
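
On Windows that boils down to something like this (rough sketch; opengl32.lib only exports up to GL 1.1, so everything newer has to be fetched through wglGetProcAddress while a GL context is current; the typedef comes from glext.h):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h>

static PFNGLCOLORTABLEPROC pglColorTable = NULL;

void init_entry_points(void)
{
    pglColorTable = (PFNGLCOLORTABLEPROC)wglGetProcAddress("glColorTable");
    /* stays NULL if the driver doesn't export it, so check before calling */
}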

http://glew.sourceforge.net/