Lookup Tables in GLSL

Hi there,

This is my first post on these boards, so if I’m doing something wrong I’m very sorry!

I’m trying to use a lookup table in a fragment shader to change the location of the pixels of a texture (e.g. warping an image).

From my research it seems the best way to do this is to use a texture as a lookup table, and then I can do something like:

uniform sampler2D texCol;
uniform sampler2D texLUT;

void main(void)
{
	vec4 uv = texture2D(texLUT, gl_TexCoord[0].xy);
	vec4 col = texture2D(texCol, uv.xy);
	gl_FragColor = col;
}


This seems to work well, but since the texture values are normalised between 0 and 1, I have to normalise the coordinates in my lookup table. This is leading to a degradation of quality, I can only assume because of rounding issues. (I’m using very large textures, 1920 x 1080.)

I’m using an NVIDIA GeForce 9800 GT card, so judging by a couple of posts I’ve read (unfortunately I can’t find the links to hand now) it looks like unfortunately I cannot use texelFetch to get my coordinates. :frowning:

I’m also working in Java and using JOGL, so theoretically I should have access to texture rectangles (since the JOGL texture class stores textures as a texture rectangle by default), but I can’t seem to get the textureRect command to work in my fragment shader…

Does anyone have any suggestions about how I can a) use unnormalised coordinates in a lookup table (the values in the table will need to range up to roughly 1920 and 1080), or b) improve the accuracy of the normalised floating-point values in the lookup table?


The problem is not normalised floating-point values, but 8-bit-per-channel precision.
Try using two channels per axis on an RGBA8 texLUT texture, for an effective 16 bits of precision; it should be way better:

vec4 uvraw = texture2D( texLUT, gl_TexCoord[0].xy );
vec2 uv = vec2( uvraw.x + uvraw.y/256.0, uvraw.z + uvraw.w/256.0 );
vec4 col = texture2D( texCol, uv );

Depending on how you build the RGBA8 lut, you may have to swap channels (big vs. little endian, etc).
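For the host side, here is a minimal Java sketch of how such an RGBA8 LUT buffer might be filled (hypothetical helper names, not part of JOGL; it assumes x goes in R/G and y in B/A, matching the shader above):

```java
// Sketch: pack two normalised (0..1) texture coordinates into the four
// bytes of an RGBA8 texel (x in R/G, y in B/A), matching the shader's
// uvraw.x + uvraw.y/256.0 reconstruction.
public class LutPacker {
    // Split a normalised coordinate (0..1) into a high and a low byte.
    static int[] pack(double n) {
        int u16 = (int) (n * 65536.0);      // 16-bit fixed point
        if (u16 > 65535) u16 = 65535;       // clamp the n == 1.0 case
        int hi = u16 >> 8;                  // top 8 bits
        int lo = u16 & 0xFF;                // bottom 8 bits
        return new int[] { hi, lo };
    }

    // Fill one texel of a row-major RGBA8 buffer with a target (x, y)
    // position given in pixels.
    static void writeTexel(byte[] rgba, int index, double xPix, double yPix,
                           int texW, int texH) {
        int[] x = pack(xPix / texW);
        int[] y = pack(yPix / texH);
        rgba[index * 4]     = (byte) x[0];  // R: x high byte
        rgba[index * 4 + 1] = (byte) x[1];  // G: x low byte
        rgba[index * 4 + 2] = (byte) y[0];  // B: y high byte
        rgba[index * 4 + 3] = (byte) y[1];  // A: y low byte
    }
}
```

The resulting byte array can then be uploaded as a GL_RGBA texture in the usual way.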


Thank you so much for replying so quickly!

I’m afraid I’m still a little confused though. As far as I could tell, the result of texture2D() is always a vector with values clamped between 0 and 1, so won’t performing

uvraw.x + uvraw.y/256.0

produce something really tiny?

For argument’s sake, say I wanted to store x = 750, y = 600 in my LUT. What would I store in each channel, and what values would that give in my uvraw vector?

Thanks again!

Texcoords are normalized. That means for a 1920-pixel-wide texture, one texel is only 1.0/1920 = 0.00052083…, so of course the values will be tiny.

When using RGBA8 textures, each channel of a texel stores an integer between 0 and 255 (255 will be seen as 1.0 within the GLSL shader).
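To put numbers on that (a small Java sketch, just illustrating the arithmetic): a stored byte b comes out of texture2D() as b/255.0, so one channel has only 256 distinct levels, which is why a single 8-bit channel cannot distinguish all 1920 columns of a texture:

```java
// Illustrates 8-bit channel precision: 256 levels cannot address 1920 columns.
public class ChannelPrecision {
    // What the GLSL shader sees for a stored channel byte (0..255 -> 0.0..1.0).
    static double channelAsFloat(int b) {
        return b / 255.0;
    }

    // Smallest x-step a single channel can represent, in pixels,
    // for a texture texWidth pixels wide.
    static double stepInPixels(int texWidth) {
        return texWidth / 255.0;   // ~7.5 pixels for a 1920-wide texture
    }
}
```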

Sorry for all the probably very silly questions, this is a little tricky for a noob like me to get my head around!

Does it matter how you split your coordinate between the channels? So again, say I wanted to store the coordinate 750 across the red and blue channels, what is the optimal way to store that value? I’m guessing something like 50 in red and 50 in blue, because then you’d do

50 + 50 = 100
100 / 256 = 0.390625
750 / 1920 = 0.390625

Would that make sense?


Read the description here for example : http://www.opengl.org/sdk/docs/man/xhtml/glColor.xml

Now you want to store x, a non-normalized texcoord in pixels, for a texture width of 1920. The normalized value you want to use within the GLSL shader is xn:
xn = x/1920.0

Now, how do you store xn in two channels which are 8-bit unsigned chars (re-read the glColor description)?
First, imagine it as a 16-bit unsigned integer:
xu16 = xn * 2^16 = xn * 65536

Now split it into two blocks of 8 bits, a high part and a low part.

The high part is truncated to the lower integer value:
xhu8 = (int) ( xu16 / 2^8 ) = xu16 / 256

The low part takes the remainder:
xlu8 = xu16 - ( xhu8 * 2^8 )

Now you can put xhu8 in the RED channel of the texture, and xlu8 in the GREEN.

In the fragment shader, uvraw.x + uvraw.y/256.0 means:
xn = uvraw.x + uvraw.y/256.0
xn = xhn + xln/256.0
xu16 = xn * 65536
xu16 = xhn * 65536 + xln * 256

with xhn = xhu8 / 256 and xln = xlu8 / 256
xhn = (int) ( xu16 / 2^8 ) / 256
-> because of the integer flooring, we are losing precision here; that lost precision is exactly what xln carries, scaled up by 256.
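Here is the arithmetic above as a worked Java round trip for x = 750 at a texture width of 1920 (a sketch: it models a stored byte b as b/256 inside the shader, exactly as the derivation does; GL actually normalizes channels by 255, a small scale factor you may want to account for when building the LUT):

```java
// Round-trips the high/low byte encode/decode arithmetic for one coordinate.
public class PackDemo {
    // Encode a normalized coordinate xn into (high byte, low byte).
    static int[] encode(double xn) {
        int xu16 = (int) (xn * 65536.0);    // 16-bit fixed point
        int xhu8 = xu16 / 256;              // high 8 bits
        int xlu8 = xu16 - xhu8 * 256;       // low 8 bits (the remainder)
        return new int[] { xhu8, xlu8 };
    }

    // Decode, mirroring the shader's uvraw.x + uvraw.y/256.0
    // (with each stored byte b modelled as b/256.0 here).
    static double decode(int xhu8, int xlu8) {
        return xhu8 / 256.0 + (xlu8 / 256.0) / 256.0;
    }
}
```

For x = 750: xn = 750/1920 = 0.390625, xu16 = 25600, so the high byte is 100 and the low byte is 0; decoding gives back 0.390625, i.e. exactly pixel 750, whereas a single 8-bit channel would be off by several pixels.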

I hope that was clearer. If you still struggle with bits, integers, floating point, etc., you should learn that properly, maybe start here:

By the way, the way the low and high parts are added will not work with texture linear interpolation, so you should use GL_NEAREST for both the min and mag texture filters.

Ah I understand now, and that worked perfectly! Amazing!

Thank you so much for all your help ZbuffeR, you’re a complete lifesaver (I’ve been tearing my hair out over this for three days straight now!)

Thanks again :slight_smile:

This topic was automatically closed 183 days after the last reply. New replies are no longer allowed.