Color Lookup Tables and Filtering

This is quite a general question, but does relate to GLSL, if tangentially.

I’m working on a color-curves plugin driven by 1D lookup-tables.

I have a Lookup-Table with a length of 256 pixels, and if I set filtering on the lookup table to Linear, I’m finding that the resulting image looks significantly darker than it should. If I set it to Nearest, it looks OK, but the number of possible values for each channel is limited by the length of the lookup table texture.

Is it generally necessary in these kinds of setups to use Nearest filtering, and make sure the length of the LUT corresponds to the per-channel bit-resolution of the image? This is fine for 8bpc images, as the LUT would only need to be 256px wide. However, with a 16bit/channel image, the LUT would need to be 65536px wide, which is larger than most GPUs can deal with, I think.
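For reference, the kind of thing I mean can be sketched in Python (the names here are mine, not actual shader code): interpolating between the two nearest entries makes a small LUT behave continuously, so the table wouldn't need one entry per representable 16-bit value.

```python
def lut_apply(lut, x):
    """Look up a normalized channel value x in [0, 1] in a small 1D LUT,
    linearly interpolating between the two nearest entries."""
    pos = x * (len(lut) - 1)          # map [0, 1] onto [0, len-1]
    i = min(int(pos), len(lut) - 2)   # index of the lower entry
    frac = pos - i                    # fractional distance to the next entry
    return lut[i] * (1.0 - frac) + lut[i + 1] * frac

# An identity ramp maps every input back to itself, even between entries:
identity = [i / 255.0 for i in range(256)]
print(lut_apply(identity, 0.5))   # ~0.5, although 0.5 * 255 falls between entries
```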

Is there some filtering method that can be manually applied in the shader to work better with LUTs of a reasonable size, or am I trying to do something really stupid here?


Incidentally, the lookup-tables I’d like to use are generated by a cubic spline algorithm, and will be quite ‘smooth’, so some loss of definition resulting from filtering may not be too much of an issue.


I am not sure how linear filtering could possibly look darker. A problem with gamma interpolation?

What is the visual result of displaying bars with this 1D texture side by side, one with linear and the other with nearest filtering (with enough magnification)? Darker or not?

If linear does not suit your needs, you can sample the nearest values around the wanted point and do your own custom interpolation, such as a cubic spline.
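For instance, a custom cubic fetch built from nearest samples could look like this Python sketch (Catmull-Rom is just one possible cubic, and the helper names are made up, not any real API):

```python
def catmull_rom(p0, p1, p2, p3, t):
    # Catmull-Rom cubic through p1 and p2; p0 and p3 are outer support points.
    return 0.5 * (2.0 * p1
                  + (p2 - p0) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (3.0 * p1 - 3.0 * p2 + p3 - p0) * t * t * t)

def lut_cubic(lut, x):
    """Cubic interpolation into a small LUT from the four nearest samples."""
    n = len(lut)
    pos = x * (n - 1)
    i = min(int(pos), n - 2)
    t = pos - i
    # Clamp the outer support indices at the table edges.
    p0 = lut[max(i - 1, 0)]
    p3 = lut[min(i + 2, n - 1)]
    return catmull_rom(p0, lut[i], lut[i + 1], p3, t)

# On a linear ramp the cubic degenerates to straight linear interpolation:
identity = [i / 255.0 for i in range(256)]
print(lut_cubic(identity, 0.3))   # ~0.3
```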

Hi ZbuffeR,

thanks for getting back to me.

It’s weird, isn’t it? I’d have thought Linear filtering would return sub-pixel samples that are a straight linear blend of adjacent ‘real’ pixels, so I can’t see why they should end up darker, either. I’ll do a couple of screenshots so you can see the difference.


Without LUT:

With LUT, Nearest filtering:

With LUT, Linear filtering:

The lookup table used is basically the top image: a 256px-wide linear ramp across the RGB channels.


Maybe you sample between the (black or transparent) border and the actual texture data? Nearest would not interpolate, but linear will let the border colour bleed in.

Can you detail the texture setup and the texcoords you use?

It looks exactly like x/2, which is probably calculated as (x0+x1+0+0)/4.
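That guess is easy to check numerically. Here's a small Python model of a bilinear fetch from a 1-pixel-high texture whose outside reads as black (a sketch under that assumption, not Core Image itself):

```python
import math

def bilinear_1row(row, x, y):
    """Bilinear sample of a 1-pixel-high texture; texels outside it read as
    black, modelling an unclamped texture with a black border."""
    # Texel centres sit at integer + 0.5; shift into filter-footprint space.
    fx, fy = x - 0.5, y - 0.5
    ix, iy = math.floor(fx), math.floor(fy)
    tx, ty = fx - ix, fy - iy
    def texel(i, j):
        if j != 0 or i < 0 or i >= len(row):
            return 0.0                 # everything outside the row is black
        return row[i]
    return (texel(ix,     iy)     * (1 - tx) * (1 - ty)
          + texel(ix + 1, iy)     * tx       * (1 - ty)
          + texel(ix,     iy + 1) * (1 - tx) * ty
          + texel(ix + 1, iy + 1) * tx       * ty)

ramp = [i / 255.0 for i in range(256)]
# Sampling at y = 0.0 puts half the filter footprint on the black border:
print(bilinear_1row(ramp, 128.5, 0.0))   # ~0.251, exactly half of ramp[128]
# Sampling at y = 0.5 (the texel centre) returns the full value:
print(bilinear_1row(ramp, 128.5, 0.5))   # ~0.502, i.e. ramp[128]
```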

I was thinking that, too, but the difference isn’t just at the left and right edges of the image; it goes all the way across.

The setup is slightly complicated by the fact that it’s not actually a GLSL shader- it’s an Apple Core Image Kernel, running in Quartz Composer. It’s written in a subset of GLSL. Coordinates are in pixels (so I guess in some ways the setup is equivalent to using sampler2DRect inputs in GLSL)

kernel vec4 lookupTableTest(sampler Image, __table sampler LUT)
{
	// Sample the input image
	vec4 pix = unpremultiply(sample(Image, samplerCoord(Image)));
	// Apply the LUT per channel
	float lutWidth = 255.0;
	vec3 outPix;
	outPix.r = sample(LUT, vec2(pix.r * lutWidth, 0.0)).r;
	outPix.g = sample(LUT, vec2(pix.g * lutWidth, 0.0)).g;
	outPix.b = sample(LUT, vec2(pix.b * lutWidth, 0.0)).b;
	return vec4(outPix, pix.a);
}

Exact same code used for both Linear and Nearest-filtered versions.

unpremultiply() un-premultiplies the RGB channels by alpha (images in Core Image are passed around internally premultiplied by their alpha channel, for some reason, so unpremultiplying restores the original RGB values).
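In plain Python, what unpremultiply() does amounts to something like this (my sketch of the standard definition, not Apple's implementation):

```python
def unpremultiply(r, g, b, a):
    """Recover straight (non-premultiplied) RGBA from premultiplied values.
    Premultiplied storage keeps r, g, b already scaled by alpha, so dividing
    by alpha restores the originals; at alpha = 0 there is no colour left
    to recover, so black is returned."""
    if a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    return (r / a, g / a, b / a, a)

print(unpremultiply(0.25, 0.25, 0.25, 0.5))   # (0.5, 0.5, 0.5, 0.5)
```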

Dunno if this will help at all.


It does look a bit like I’m getting 0 to 0.5 with Linear filtering, you’re right.
Not sure if that’s a coincidence or not.

Could be a Core Image issue rather than a conceptual problem with the filter kernel? I’m starting to think so…


Haha, I see.
You mentioned a 1D texture, but you’re actually using a 2D one, right? (addressing with a vec2)
So, assuming the border is black, 2 of the 4 pixels averaged by the bilinear filter are always black. Exactly as I guessed initially.

So Apple Core Image Kernel is not GLSL…

sampler2DRect, not sampler1DRect?
What do you get with:
outPix.r = sample(LUT, vec2(pix.r * lutWidth, 0.0)).r;
outPix.g = sample(LUT, vec2(pix.g * lutWidth, 0.5)).g;
outPix.b = sample(LUT, vec2(pix.b * lutWidth, 1.0)).b;
If it is greenish, go with 0.5!

If it’s a bilinear filter, it will take 2 pixels from a neighbouring line anyway, resulting in a gray colour. I propose one of the following:

  1. use sampler1D*
  2. do the filtering manually

Ahhh… OK. So, if I make the LUT 3 pixels high and sample at y = 1, it should work perfectly.

And, it does!

Thank you very much DmitryM.


Nice solution :), though using a 1D texture would be simpler.

It’s a subset of GLSL, with no vertex shader, some extra bits added, and some stuff removed. Details here: …gslang_ext.html, if you’re interested.

> sampler2DRect, not sampler1DRect?
> What do you get with:
> outPix.r = sample(LUT, vec2(pix.r * lutWidth, 0.0)).r;
> outPix.g = sample(LUT, vec2(pix.g * lutWidth, 0.5)).g;
> outPix.b = sample(LUT, vec2(pix.b * lutWidth, 1.0)).b;
> If it is greenish, go with 0.5!

Ah, there’s no 1D sampler in Core Image. I’ve solved it by making the LUT 3 pixels high, and sampling down the middle, at y = 1 (assuming linear filtering just filters 1px above and 1 below the current sample).
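To see why the 3-row trick works, here's the same kind of bilinear model with three identical rows and black outside (again only a Python sketch of the assumed filtering, not Core Image itself):

```python
import math

def bilinear_3rows(row, x, y):
    """Bilinear sample of a texture made of three identical copies of `row`
    (rows 0..2), with black outside, modelling the 256x3 LUT workaround."""
    fx, fy = x - 0.5, y - 0.5
    ix, iy = math.floor(fx), math.floor(fy)
    tx, ty = fx - ix, fy - iy
    def texel(i, j):
        if j < 0 or j > 2 or i < 0 or i >= len(row):
            return 0.0                 # black border outside the 3 rows
        return row[i]
    return (texel(ix,     iy)     * (1 - tx) * (1 - ty)
          + texel(ix + 1, iy)     * tx       * (1 - ty)
          + texel(ix,     iy + 1) * (1 - tx) * ty
          + texel(ix + 1, iy + 1) * tx       * ty)

ramp = [i / 255.0 for i in range(256)]
# At y = 1 the filter blends rows 0 and 1, which both hold real data, so the
# border never bleeds in and the full value comes back:
print(bilinear_3rows(ramp, 128.5, 1.0))   # ~0.502, i.e. ramp[128]
```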


Thanks! No 1D texture type in Core Image, unfortunately. The LUT is now 3x bigger than it needs to be, but at 256x3 it’s still not very big, so I’m not too worried.

I guess 1D GLSL textures are only filtered along the X axis when linear filtering is enabled. Unfortunately, with 2D textures in Core Image, you can’t stop linear filtering from happening on both axes.


If Core Image doesn’t support 1D textures, then your solution is probably the best one.
BTW, there is no sampler1DRect in GLSL (1.4).

1 pixel high + sampling at exactly y = 0.5 should give the same correct result.

But do as you please as long it has a good result :slight_smile:

I’m curious why you think sampling at 0.5 will not touch the border line in the case of a 1xN texture. As far as I know, the HW takes 2x2 pixels as the bilinear filter kernel.

Oh, you’re right, it DOES work! Shouldn’t it be sampling at -0.5 and 1.5 in that case, though (which would give the same incorrect result)? Core Image textures aren’t clamped or repeated, so any sample from outside the texture returns 0.
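For what it's worth, assuming the usual convention that texel centres sit at integer + 0.5 (a sketch of that convention, not anything from Core Image documentation), a sample at y = 0.5 lands exactly on the centre of row 0, so the second row of the 2x2 footprint gets weight zero and the border cannot contribute:

```python
import math

y = 0.5
fy = y - 0.5         # shift into filter-footprint space -> 0.0
iy = math.floor(fy)  # upper row of the 2x2 bilinear footprint -> 0
ty = fy - iy         # blend weight of the lower row (row 1 / border) -> 0.0
print(iy, ty)        # rows 0 and 1 are fetched, but row 1's weight is 0.0
```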