My application renders 3D textures (8-bit, GL_LUMINANCE) and applies an 8-bit palette/LUT via a dependent-texture lookup, implemented with ARB_fragment_program. This works just fine on NVIDIA and ATI cards.
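For reference, the dependent lookup is essentially a two-instruction fragment program; this is a minimal sketch (the texture-unit assignments are assumptions), not my exact program:

```
!!ARBfp1.0
# Sample the 8-bit luminance volume with the interpolated 3D coordinate...
TEMP index;
TEX index, fragment.texcoord[0], texture[0], 3D;   # volume on unit 0
# ...then use that value as the coordinate for a 1D palette lookup.
TEX result.color, index, texture[1], 1D;           # 256-entry LUT on unit 1
END
```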
I recently tested on a 3DLabs Wildcat Realizm 800 with 640MB.
Performance on this $1,700 “ultra high-end” card is, however, pathetic: 4-5 times slower than a GeForce 7800 or a Radeon X1800. For volumes larger than 512x512x512 it appears to fall back to software rendering (15-20 seconds per frame!), and it sometimes locks up my PC so badly that I have to power off.
I would appreciate feedback from users who are more familiar with 3Dlabs cards. Do you think performance would improve with a different implementation of the texture lookup, say a GLSL fragment shader? I’d rather not spend time experimenting if the prognosis is not good.
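For what it's worth, the GLSL version I have in mind would be roughly the following (GLSL 1.10 style; the sampler names and unit bindings are placeholders, not tested on the Realizm):

```glsl
uniform sampler3D volume;   // 8-bit GL_LUMINANCE volume, unit 0
uniform sampler1D palette;  // 256-entry RGBA LUT, unit 1

void main()
{
    // Luminance textures replicate the value into .rgb, so .r is the index.
    float index = texture3D(volume, gl_TexCoord[0].stp).r;
    gl_FragColor = texture1D(palette, index);
}
```

If the Realizm's driver compiles ARB_fragment_program and GLSL to the same back end, I'd expect no difference, which is why I'm asking before porting.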