This is quite a general question, but it does relate to GLSL, if only tangentially.
I'm working on a color-curves plugin driven by 1D lookup tables (LUTs).
I have a lookup table that is 256 texels long. If I set its filtering to Linear, the resulting image comes out significantly darker than it should. With Nearest it looks fine, but then the number of possible output values per channel is limited by the length of the LUT texture.
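For context, the fragment shader is doing roughly the following (simplified; `uImage` and `uLUT` are just stand-ins for my actual sampler uniforms, and the LUT is a 256×1 texture):

```glsl
uniform sampler2D uImage;
uniform sampler2D uLUT;     // 256x1 LUT, filtering set to Linear or Nearest
varying vec2 vTexCoord;

void main()
{
    vec4 src = texture2D(uImage, vTexCoord);
    // Each channel's value is used directly as the x coordinate into the LUT.
    float r = texture2D(uLUT, vec2(src.r, 0.5)).r;
    float g = texture2D(uLUT, vec2(src.g, 0.5)).g;
    float b = texture2D(uLUT, vec2(src.b, 0.5)).b;
    gl_FragColor = vec4(r, g, b, src.a);
}
```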
Is it generally necessary in these kinds of setups to use Nearest filtering and make sure the length of the LUT matches the per-channel bit depth of the image? That's fine for 8-bit-per-channel images, since the LUT only needs to be 256 texels wide. With a 16-bit-per-channel image, however, the LUT would need to be 65,536 texels wide, which I believe exceeds the maximum texture size on most GPUs.
Is there some filtering method I can apply manually in the shader that works better with a LUT of reasonable size, or am I trying to do something really stupid here?
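To clarify what I mean by filtering manually: I was imagining keeping the LUT texture on Nearest and interpolating between the two neighbouring entries myself, something along these lines (untested sketch; `uLUTSize`, `lutFetch`, and `applyLUT` are placeholder names, not anything from an existing API):

```glsl
uniform sampler2D uLUT;   // 256x1 LUT, filtering set to Nearest
uniform float uLUTSize;   // 256.0 in my case

// Fetch the LUT entry at integer index i, sampled at the texel centre.
vec3 lutFetch(float i)
{
    return texture2D(uLUT, vec2((i + 0.5) / uLUTSize, 0.5)).rgb;
}

// Map each input channel in [0,1] to an output value by linearly
// interpolating between the two nearest LUT entries.
vec3 applyLUT(vec3 color)
{
    vec3 pos  = color * (uLUTSize - 1.0);  // position in LUT index space
    vec3 lo   = floor(pos);
    vec3 frac = pos - lo;
    vec3 result;
    // Each channel lands between a different pair of LUT entries,
    // so fetch and mix per channel.
    result.r = mix(lutFetch(lo.r).r, lutFetch(min(lo.r + 1.0, uLUTSize - 1.0)).r, frac.r);
    result.g = mix(lutFetch(lo.g).g, lutFetch(min(lo.g + 1.0, uLUTSize - 1.0)).g, frac.g);
    result.b = mix(lutFetch(lo.b).b, lutFetch(min(lo.b + 1.0, uLUTSize - 1.0)).b, frac.b);
    return result;
}
```

Is something like that the usual approach, or is there a better-established technique?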