Have you taken a look at the recent post on opengl.org about the texture formats supported by NVIDIA?
I am happy to know that EXT_paletted_texture is supported starting from NV10, but I can’t understand why it is not supported on NV40!
Do you have any idea about this?
Maybe it will be supported by future driver releases?
What is the output of glinfo on a Geforce 6800?
EXT_paletted_texture is not supported by the GeForce 6800 hardware and support will not be added in any future driver releases.
paletted textures, sadly, are gone forever.
there was a thread a while back that discussed this:
and do a search for EXT_paletted_texture and you’ll find a few others.
gamers found it useful for texture compression and texture animation, but those of us involved with volume rendering found it IRREPLACEABLE for pre-filtered texture lookups.
i guess it consumed too many transistors though because nvidia got rid of it, and other vendors never had it.
i’m still ticked about this because i have found no easy, fast way to do a pre-filtered lookup that yields correctly interpolated values. you have to do some crazy multi-pass stuff in a custom shader to look up the colors based on pre-filtered texel values (i.e. nearest-neighbor) then do your own interpolation.
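here's a rough sketch of the problem in plain python (no GL; the palette entries are just made-up illustrative values):

```python
# Why linear filtering must happen AFTER the palette lookup, not before.
palette = {0: (0.0, 0.0, 0.0, 0.0),     # index 0   -> transparent black
           255: (1.0, 0.0, 0.0, 1.0)}   # index 255 -> opaque red

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

# Two neighboring texels with indices 0 and 255, sample point halfway between.
i0, i1, t = 0, 255, 0.5

# Wrong: the hardware filters the INDEX first, producing 128 here --
# an index that may map to a completely unrelated palette entry.
filtered_index = round(i0 + (i1 - i0) * t)   # 128, not even in our palette

# Right (what a filtered paletted-texture lookup effectively gave you):
# look up each neighbor's color, THEN interpolate the colors.
color = lerp(palette[i0], palette[i1], t)
print(color)  # (0.5, 0.0, 0.0, 0.5): half-transparent half-red
```

this is exactly the "lookup, then interpolate yourself" step that has to be spelled out in the shader now.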
anyway, you’re out of luck, but let me know if you find a good way to duplicate the behavior!
The EXT_paletted_texture and EXT_shared_texture_palette extension specifications have documented for a while now that future NVIDIA GPUs would not support these extensions. See the Support section of:
Selected NVIDIA GPUs: NV1x (GeForce 256, GeForce2, GeForce4 MX,
GeForce4 Go, Quadro, Quadro2), NV2x (GeForce3, GeForce4 Ti,
Quadro DCC, Quadro4 XGL), and NV3x (GeForce FX 5xxx, Quadro FX
1000/2000/3000). NV3 (Riva 128) and NV4 (TNT, TNT2) GPUs and NV4x
GPUs do NOT support this functionality (no hardware support).
Future NVIDIA GPU designs will no longer support paletted textures.
There are a couple of alternatives to paletted textures, depending on how and why you used them.
If you used paletted textures as a form of texture compression (8 bits per texel for a 32-bit RGBA color), you are probably better off using the ARB_texture_compression or EXT_texture_compression_s3tc extensions for better compression ratios and faster rendering performance.
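For a sense of the savings, here is some back-of-the-envelope arithmetic for a 256x256 texture (pure Python, no GL; the texture size is just an example):

```python
# Storage for a 256x256 texture under the different schemes discussed above.
w = h = 256
rgba8 = w * h * 4                    # uncompressed 32-bit RGBA
paletted = w * h * 1 + 256 * 4       # 8-bit indices + a 256-entry RGBA palette
dxt1 = (w // 4) * (h // 4) * 8       # S3TC DXT1: 8 bytes per 4x4 block (4 bpp)
dxt5 = (w // 4) * (h // 4) * 16      # S3TC DXT5: 16 bytes per 4x4 block (8 bpp)
print(rgba8, paletted, dxt1, dxt5)   # 262144 66560 32768 65536
```

So DXT1 is roughly half the size of the paletted version (with no palette indirection in the texture unit), and DXT5 is comparable in size while keeping a full alpha channel.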
If you used paletted textures as a poor man’s dependent texture lookup where you modified the color palette to control how texel indices mapped to colors, you can accomplish something similar with true dependent textures using the ARB_fragment_program or NV_fragment_program extensions. You can use Cg to write such programs in a high-level language.
If you used paletted textures as a form of color map animation, you can use multiple textures or use the glPixelMap function to perform dynamic component remappings. This requires respecifying the texture for each color map animation change.
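As a rough sketch of what the pixel maps do at upload time (a plain Python stand-in with hypothetical two-entry maps, not GL code):

```python
# CPU-side equivalent of glPixelMap* + glTexImage*(..., GL_COLOR_INDEX, ...):
# each index is pushed through the I-to-R/G/B/A maps ONCE, at upload time.
# Animating the palette therefore means re-expanding and re-uploading.
def expand_indices(indices, pal_r, pal_g, pal_b, pal_a):
    return [(pal_r[i], pal_g[i], pal_b[i], pal_a[i]) for i in indices]

indices = [0, 1, 1, 0]
frame1 = expand_indices(indices, [0.0, 1.0], [0.0, 0.0], [0.0, 0.0], [1.0, 1.0])
# "Animate" by changing only the maps, then re-expand and re-upload:
frame2 = expand_indices(indices, [0.0, 0.0], [0.0, 1.0], [0.0, 0.0], [1.0, 1.0])
print(frame1[1], frame2[1])  # (1.0, 0.0, 0.0, 1.0) (0.0, 1.0, 0.0, 1.0)
```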
I hope this helps.
Thanx for the answers.
Anyway, is glPixelMap accelerated by most GPUs?
mark: what, in your opinion, is the best way then to achieve pre-filter lookups? that (i believe) is the only functionality the extension offered that is not easily duplicated through other methods.
for example, say you had an 8bit luminance 3d texture representing the indexes of segmented structures.
if you wanted to map each structure to its own color and opacity, you could easily do this with a 1d dependent fetch. the problem is, since linear interpolation of the index values occurs BEFORE the lookup, you will end up with pixels with colors from all over the table, instead of pixels blended from just the 4 or 8 correct neighbor texels.
doing the dependent fetch with nearest neighbor interpolation will of course give the pixels the correct color and opacity, but they are not blended correctly, and in volume rendering there will be severe aliasing.
the only ways i have seen this addressed are with multi-pass shaders that do complicated multi-texturing and custom interpolation, which is expensive to say the least.
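for the record, here's what that custom interpolation boils down to, sketched in plain python (the table entries are hypothetical; a shader would do this per fragment with 8 nearest-neighbor fetches):

```python
# Manual "pre-filtered" lookup: fetch the 8 nearest-neighbor indices,
# classify each through the table, then trilinearly blend the CLASSIFIED
# colors -- never the raw indices.
def lerp(a, b, t):
    return [x + (y - x) * t for x, y in zip(a, b)]

def classify_then_filter(corner_indices, table, fx, fy, fz):
    """corner_indices: the 8 cell-corner indices, ordered
    (x,y,z) = 000,100,010,110,001,101,011,111; table maps index -> RGBA."""
    c = [table[i] for i in corner_indices]      # classify each corner first
    x00 = lerp(c[0], c[1], fx); x10 = lerp(c[2], c[3], fx)
    x01 = lerp(c[4], c[5], fx); x11 = lerp(c[6], c[7], fx)
    y0 = lerp(x00, x10, fy);    y1 = lerp(x01, x11, fy)
    return lerp(y0, y1, fz)

# Two segmented structures: index 1 -> opaque red, index 2 -> half-opaque green.
table = {1: [1.0, 0.0, 0.0, 1.0], 2: [0.0, 1.0, 0.0, 0.5]}
rgba = classify_then_filter([1, 1, 1, 1, 2, 2, 2, 2], table, 0.5, 0.5, 0.5)
print(rgba)  # [0.5, 0.5, 0.0, 0.75]: a blend of ONLY the two real structures
```

note the result only ever mixes the colors of the actual neighboring structures, which is what nearest-neighbor-plus-manual-blend buys you over letting the hardware filter the indices.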