Did you read the instructions? Everything you need to know is documented in
Look at the arguments you use when you give OpenGL your image:
TexImage2D(target, level, internalformat, width, height, border, format, type, pixels);
Specifically, the format and type of the data you pass in should be LUMINANCE and UNSIGNED_SHORT.
And the internalformat should be LUMINANCE16, to hint to OpenGL that you want it to store
the data on the GPU with 16 bits. Note, this is only a hint-- some hardware doesn’t support
16-bit luminance storage, and the driver will quietly fall back to a smaller internal format.
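For example, a minimal upload sketch might look like this in actual C code (where the calls carry the gl/GL_ prefixes; tex, width, height and pixels stand in for your own texture object and image data):

glPixelStorei(GL_UNPACK_ALIGNMENT, 2);   /* rows of unsigned shorts, don't pad to 4 bytes */
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D,              /* target */
             0,                          /* level */
             GL_LUMINANCE16,             /* internalformat: please keep 16 bits */
             width, height,
             0,                          /* border */
             GL_LUMINANCE,               /* format of the data you pass in */
             GL_UNSIGNED_SHORT,          /* type of the data you pass in */
             pixels);                    /* pointer to width*height unsigned shorts */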
After uploading your texture, take a look at how OpenGL actually stored it:
GetTexLevelParameteriv(target, level, TEXTURE_INTERNAL_FORMAT, &actual_format);
If the internalformat comes back as LUMINANCE16, you’re in good shape. If it comes back as
something else, like LUMINANCE8, your driver or hardware doesn’t support what you’re trying
to do. Go buy new hardware, or tell your vendor to fix their driver.
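In code, the check is just this sketch (substitute whatever error handling fits your application):

GLint actual_format = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &actual_format);
if (actual_format != GL_LUMINANCE16) {
    /* the driver fell back to something smaller, e.g. GL_LUMINANCE8 */
}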
Now that your texture is on the GPU, you sample from it:
vec4 color = texture2D(myTexture, vTexCoord);
Here, the sampler2D “myTexture” is converting the 16-bit luminance pixel into 32-bit float RGBA.
Assuming the texture was stored as LUMINANCE16, you haven’t lost any data, it has just been
converted to float in the range [0, 1], and splatted to RGBA according to the spec: (L, L, L, 1).
If you want it back in the range 0-65535, then do:
color *= 65535.0;
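Putting the sampling and the rescale together, a whole fragment shader for this case can be as small as the sketch below (the uniform and varying names are just examples):

uniform sampler2D myTexture;
varying vec2 vTexCoord;

void main()
{
    /* LUMINANCE16 arrives as (L, L, L, 1), with L normalized to [0, 1] */
    vec4 color = texture2D(myTexture, vTexCoord);
    float raw = color.r * 65535.0;   /* back in the 0-65535 range, if you need it */
    gl_FragColor = color;            /* or compute something from raw instead */
}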
As for LUMINANCE_INTEGER_EXT, this is part of the EXT_texture_integer extension, which is only supported
on the latest round of GPUs. Here, you pass in the data as LUMINANCE_INTEGER_EXT, UNSIGNED_SHORT, and
hint an internal type of LUMINANCE16UI_EXT. Again, check the internal format to see how OpenGL
actually stored the data.
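The upload call for the integer path has the same shape as before, just with the integer constants (a sketch, with the same placeholder names as above):

glTexImage2D(GL_TEXTURE_2D,
             0,
             GL_LUMINANCE16UI_EXT,        /* internalformat from EXT_texture_integer */
             width, height,
             0,
             GL_LUMINANCE_INTEGER_EXT,    /* format: unnormalized integer luminance */
             GL_UNSIGNED_SHORT,           /* type */
             pixels);
/* then query GL_TEXTURE_INTERNAL_FORMAT again to see what you really got */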
Then, to sample from the integer texture in a shader, you need the EXT_gpu_shader4 extension
to use an integer sampler:
uniform usampler2D myUnsignedIntegerTexture;
unsigned int L = texture2D(myUnsignedIntegerTexture, vTexCoord).x;
Here, the usampler2D “myUnsignedIntegerTexture” is converting the 16-bit luminance pixel into
32-bit unsigned integer RGBA (L, L, L, 1), and I stored the x element into a 32-bit unsigned int.
After that, you can use the bottom 16 bits of L however you like.
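As a complete fragment shader, that looks roughly like the sketch below (it needs EXT_texture_integer on the texture side and EXT_gpu_shader4 in the shader; the names are again just examples):

#extension GL_EXT_gpu_shader4 : require

uniform usampler2D myUnsignedIntegerTexture;
varying vec2 vTexCoord;

void main()
{
    /* texture2D on a usampler2D returns unsigned integers, not normalized floats */
    unsigned int L = texture2D(myUnsignedIntegerTexture, vTexCoord).x;
    /* mask, shift or compare the 16-bit value as you like; here it is just
       converted back to a displayable grey level */
    gl_FragColor = vec4(vec3(float(L) / 65535.0), 1.0);
}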
But, unless you want to do bit-wise operations on the texel (like EXT_gpu_shader4 allows), there
is no need to use integer textures or samplers. If you’re writing a shader to do typical imaging
operations like contrast/gamma/sharpen etc., then regular floats are fine.