16-bit accuracy from 8-bit images?

I am using OpenGL ES 2.0 in a mobile application. I want to create a smooth gradient in a shader using just a single color channel. To make it smoother, I was hoping that the texture's linear interpolation could be applied during sampling but have the sampled data come back with higher precision than the texture stores.

Here is an example fragment shader snippet:

vec4 color_map = texture2D(ColorMap, textureCoord);

So, as an example, suppose my texture's red channel holds the values (200, 200, 201, 201, 202, 202).

Can a sample come back as, say, 200.4 or 201.6? I realize that between adjacent texels holding the same value you would still get a flat stair-step region, but would the interpolated values between differing texels have more than 8-bit resolution? Note that you can assume I've already enabled linear filtering.
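To illustrate what I mean, here is a hypothetical sketch (the 6-texel width and the coordinate arithmetic are my own illustration, not real code from my app):

```glsl
// Hypothetical: ColorMap is 6 texels wide; its red channel holds
// (200, 200, 201, 201, 202, 202) / 255. Texel centers sit at
// (i + 0.5) / 6.0, so 2.0 / 6.0 is exactly halfway between
// texel 1 (value 200) and texel 2 (value 201).
vec2 midCoord = vec2(2.0 / 6.0, 0.5);
vec4 texel = texture2D(ColorMap, midCoord);
// With linear filtering, the ideal result is texel.r == 200.5 / 255.0,
// a value with no exact 8-bit representation. Is that extra
// precision actually preserved?
```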

The values produced by linear interpolation shouldn’t be restricted to the precision of the texture. However, ES 2 doesn’t require support for highp floats in the fragment shader, and mediump isn’t required to have more than 10 bits of precision. So while you should get better than 8-bit resolution, you won’t necessarily get much better.
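Since highp support in the fragment shader is optional in ES 2, you can ask for it when available and fall back otherwise. A minimal sketch using the standard `GL_FRAGMENT_PRECISION_HIGH` macro (the uniform and varying names are taken from the question):

```glsl
// Request the highest float precision this GPU's fragment stage offers.
// GL_FRAGMENT_PRECISION_HIGH is predefined by the compiler when highp
// is supported in fragment shaders (GLSL ES 1.00, section 4.5.4).
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

uniform sampler2D ColorMap;
varying vec2 textureCoord;

void main() {
    // The filtered sample may carry fractional values between 8-bit
    // texel levels, but only up to the precision of the float type
    // selected above (mediump may be as little as 10 bits).
    vec4 color_map = texture2D(ColorMap, textureCoord);
    gl_FragColor = vec4(vec3(color_map.r), 1.0);
}
```

On the API side, `glGetShaderPrecisionFormat` can report the actual range and precision the implementation provides for each qualifier.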

Also: ES 2 doesn’t require support for 8-bit texture components; an implementation might only support 5:6:5, 5:5:5:1 or 4:4:4:4 textures.

Amazing answer! These are exactly the facts I needed but didn't know how to look up. Thanks so much!