Dynamic palette (color lut)

I’m trying to find a way to render a full-screen quad with a large texture and have the pixels go through a simple transformation. The variables driving the transformation change dynamically, so we need to reapply the transformation every time they change.

I know of a few ways to do it:

  1. Apply the transformation to the texture on the CPU.
    This is what we do right now, but the texture is pretty big: the loop takes about 50ms, and after that we still have to delete the old texture and send the new one to the graphics card.

  2. Use a pixel shader.
    Can’t do it… It has to run on graphics cards that are pretty old. No extensions can be used.

  3. Use a pixel map.
    The idea is the same as what we would use with the pixel shader. Instead of updating the large texture, we change the palette (color look up table) and then render using this palette.
    Setting a pixel map only affects a few functions (glCopyPixels, glCopyTexImage1D, glCopyTexImage2D, glCopyTexSubImage1D, glCopyTexSubImage2D, glDrawPixels, glReadPixels, glTexImage1D, glTexImage2D, glTexSubImage1D, and glTexSubImage2D). Regular rendering (glBegin, glVertex, glEnd) is not affected.

The problem with this approach is that it is very slow (it’s probably not HW accelerated).
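For reference, approach 3 can be sketched roughly like this, assuming the source image is stored as 8-bit indices and expanded through the fixed-function pixel maps during upload (the function and parameter names here are illustrative, not from the original poster's code):

```c
#include <GL/gl.h>

/* Sketch of the pixel-map approach: keep the big image as 8-bit indices
 * and route the upload through GL_MAP_COLOR, so a parameter change only
 * means reloading three 256-entry tables before re-uploading. */
void upload_with_pixel_map(GLuint tex, int w, int h,
                           const GLubyte *indices,
                           const GLfloat lut_r[256],
                           const GLfloat lut_g[256],
                           const GLfloat lut_b[256])
{
    glPixelTransferi(GL_MAP_COLOR, GL_TRUE);        /* enable color mapping */
    glPixelMapfv(GL_PIXEL_MAP_R_TO_R, 256, lut_r);  /* per-component tables */
    glPixelMapfv(GL_PIXEL_MAP_G_TO_G, 256, lut_g);
    glPixelMapfv(GL_PIXEL_MAP_B_TO_B, 256, lut_b);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_LUMINANCE, GL_UNSIGNED_BYTE, indices);

    glPixelTransferi(GL_MAP_COLOR, GL_FALSE);       /* restore state */
}
```

Note this still re-uploads the whole image each time the table changes; the mapping just happens inside the driver during the transfer, which (as noted above) is often a software path.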

Does anyone know different ways to do it (can’t be slower than looping through the whole big texture, can’t use extensions)?

Instead of deleting the texture and uploading a new one, I would recommend using glTexSubImage2D. It is even faster if you update the whole texture instead of “reallocating” it.
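In other words, allocate the texture once and then only overwrite its texels on each update; something like the following sketch (the 16-bit 5-6-5 format is a guess based on the 2048x2048x16 texture mentioned later):

```c
#include <GL/gl.h>

/* Reuse the texture storage created once with glTexImage2D; per update,
 * only overwrite the texels. No glDeleteTextures, no reallocation. */
void update_texture(GLuint tex, int w, int h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0,        /* level 0 */
                    0, 0, w, h,              /* full-texture region */
                    GL_RGB, GL_UNSIGNED_SHORT_5_6_5, pixels);
}
```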

If you have access to GL_ARB_imaging, take a look - there may be some interesting features for your purpose. GL_SGI_texture_color_table, for example, sounds good too.

We can’t use GL_ARB_imaging, it isn’t supported by some graphics cards we need this to work with (my Intel® 82865G for instance).

I will see if glTexSubImage improves performance, but it still leaves us limited to that 50ms loop through the large texture. We are testing with a 2048x2048x16 texture now, but it may be even larger…

It looks like you want to solve an unsolvable problem, like calculating one Mandelbrot per second on a C64 :slight_smile:

But what destination resolution do you really have? Do you need to upload a 2048x2048 texture per frame? I have a huge 24" screen and I can’t display that resolution, so for me half that image would be enough to fill the screen.
I know you need a power-of-two texture for old HW, but you don’t need to use more than the screen resolution, like 1280x1024 or similar, so glTexSubImage2D could be a real saver here.

If you can, try to use some kind of SIMD (like SSE, MMX, …) when transforming the texture on the CPU. It depends on what machine you are running on.
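As a scalar baseline for that CPU pass: if the “simple transformation” can be precomputed into a table, each texel becomes a single lookup, and changing the parameters only means rebuilding the 65536-entry table once (names here are illustrative):

```c
#include <stdint.h>
#include <stddef.h>

/* Scalar baseline for the CPU transform pass on 16-bit texels:
 * one load + one table lookup per texel. Rebuilding `lut` when the
 * transform parameters change costs 64K entries, not 4M texels. */
void apply_lut16(uint16_t *dst, const uint16_t *src, size_t count,
                 const uint16_t lut[65536])
{
    for (size_t i = 0; i < count; ++i)
        dst[i] = lut[src[i]];
}
```

A pure table lookup like this can’t be vectorized with MMX/SSE (there is no gather instruction on that hardware), so SIMD mainly helps when the transform stays arithmetic.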

The problem really isn’t creating/updating the texture. It’s applying the transformation to every texel.

If we were to apply the transformation only to the pixels that will actually be drawn, we would need to do the zooming, interpolation, and clipping on the CPU first. And since we are not really trying to do anything too complicated, we might as well not use the GPU at all…

I already thought that it would require us to do the image processing on the CPU, I was just hoping someone had a solution.

It does look like an unsolvable problem (without shaders, extensions) :frowning:

Thanks for the help.

> it still leaves us limited to that 50ms loop through the large texture.

It should not take you 50ms to do that. You need to invest in some assembly language code - some MMX or SSE operations.
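For an arithmetic transform, SSE2 intrinsics can process eight 16-bit texels per instruction. A minimal sketch, assuming the transform is (or can be reduced to) a saturating brightness offset - the transform itself is a placeholder, not the original poster’s actual one:

```c
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stdint.h>
#include <stddef.h>

/* Add a brightness offset to 16-bit texels with unsigned saturation,
 * 8 texels per SSE2 iteration, plus a scalar tail for the remainder. */
void add_offset_sse2(uint16_t *texels, size_t count, uint16_t offset)
{
    __m128i off = _mm_set1_epi16((short)offset);
    size_t i = 0;
    for (; i + 8 <= count; i += 8) {
        __m128i v = _mm_loadu_si128((const __m128i *)(texels + i));
        _mm_storeu_si128((__m128i *)(texels + i),
                         _mm_adds_epu16(v, off));   /* saturating add */
    }
    for (; i < count; ++i) {                        /* scalar tail */
        uint32_t s = (uint32_t)texels[i] + offset;
        texels[i] = s > 0xFFFF ? 0xFFFF : (uint16_t)s;
    }
}
```

On a 2048x2048 image that is 4M texels, i.e. roughly 512K vector iterations - well under 50ms on hardware of that era.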

> We are testing with a 2048x2048x16 texture now, but it may be even larger…
Wait a second. How are you even able to use a 2048x2048 texture if shaders aren’t available to you in some form? I wasn’t aware of any pre-GeForce3-quality hardware that allowed for textures of that size.

Hmmm… It works fine on my machine (Intel® 82865G).
Maybe it’s falling back to software? But rendering and updating the texture is very fast.

I don’t know… :stuck_out_tongue:

2048^2 on gf256 and gf2mx:-