Question about textures using GLSL

Hi, all. I have a question about GLSL. I have a grayscale medical image whose pixel format is unsigned short, and I want to access the pixel value in the fragment shader. I am using the following code snippet:

uniform sampler2D myTexture;
varying vec2 vTexCoord;
vec4 color = texture2D(myTexture, vTexCoord);

I found that the value is clamped to 0.0 ~ 1.0 (because of the sampler), not 0 ~ 65535, which means a huge loss of information…

Is there any way I can get the exact intensity of the pixel?

I also heard that there is an OpenGL extension called
“GL_LUMINANCE_INTEGER_EXT”, so I tried adjusting my code to:

glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, pack->width,
             pack->height, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT,
             pack->texture);

But this time I found that everything I fetch with texture2D is zero…

What should I do???

Thanks in advance, and any suggestion would be appreciated~~

elvis

Did you read the instructions? Everything you need to know is documented in
the specifications.

Look at the arguments you use when you give OpenGL your image:
TexImage2D(target, level, internalformat, width, height, border, format, type, pixels);

Specifically, the format and type of the data you pass in should be LUMINANCE and UNSIGNED_SHORT,
and the internalformat should be LUMINANCE16, to hint to OpenGL that you want it to store
the data on the GPU with 16 bits. Note that this is only a hint: some hardware doesn’t support
LUMINANCE16.
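
So, reusing the pack fields from your snippet, a normalized 16-bit upload would look roughly like this (a sketch, not drop-in code):

glPixelStorei(GL_UNPACK_ALIGNMENT, 2);   // unsigned short rows are 2-byte aligned
glTexImage2D(GL_TEXTURE_2D,
             0,                          // mip level 0
             GL_LUMINANCE16,             // requested internal format (only a hint)
             pack->width, pack->height,
             0,                          // border
             GL_LUMINANCE,               // format of the client data
             GL_UNSIGNED_SHORT,          // type of the client data
             pack->texture);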

After uploading your texture, take a look at how OpenGL actually stored it:
GetTexLevelParameteriv(target, level, TEXTURE_INTERNAL_FORMAT, &actual_format);

If the internalformat comes back as LUMINANCE16, you’re in good shape. If it comes back as
something else, like LUMINANCE8, your driver or hardware doesn’t support what you’re trying
to do. Go buy new hardware, or tell your vendor to fix their driver.
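
For example, a minimal check, assuming the texture is still bound to GL_TEXTURE_2D:

GLint actual_format = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &actual_format);
// if actual_format is not GL_LUMINANCE16, the driver demoted your data (e.g. to GL_LUMINANCE8)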

Now that your texture is on the GPU, you sample from it:
vec4 color = texture2D(myTexture, vTexCoord);

Here, the sampler2D “myTexture” is converting the 16-bit luminance pixel into 32-bit float RGBA.
Assuming the texture was stored as LUMINANCE16, you haven’t lost any data; it has just been
converted to float in the range [0, 1] and splatted to RGBA according to the spec: (L, L, L, 1).
If you want it back in the range 0-65535, then do:
color *= 65535.0;
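
As a minimal sketch of this path, reusing the uniform and varying names from your snippet (the output line is just illustrative):

uniform sampler2D myTexture;
varying vec2 vTexCoord;

void main()
{
    // normalized fetch: 0.0 corresponds to 0, 1.0 corresponds to 65535
    float gray = texture2D(myTexture, vTexCoord).r;

    // recover the original intensity if you need it as a number
    float intensity = gray * 65535.0;

    // ...do your imaging math on 'intensity' or on 'gray' directly...
    gl_FragColor = vec4(vec3(gray), 1.0);
}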

As for LUMINANCE_INTEGER_EXT, this is part of the EXT_texture_integer extension, which is only supported
on the latest round of GPUs. Here, you pass in the data as LUMINANCE_INTEGER, UNSIGNED_SHORT, and
hint an internal type of LUMINANCE16UI_EXT. Again, check the internal format to see how OpenGL
actually stored the data.
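
A sketch of the upload side for the integer path, again reusing your pack fields and assuming the texture object is already bound (note the NEAREST filters; integer textures are restricted to NEAREST filtering):

glPixelStorei(GL_UNPACK_ALIGNMENT, 2);                              // unsigned short rows are 2-byte aligned
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // no linear or mipmap filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT,
             pack->width, pack->height, 0,
             GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, pack->texture);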

Then, to sample from the integer texture in a shader, you need the EXT_gpu_shader4 extension
to use an integer sampler:
uniform usampler2D myUnsignedIntegerTexture;
unsigned int L = texture2D(myUnsignedIntegerTexture, vTexCoord).x;

Here, the usampler2D “myUnsignedIntegerTexture” is converting the 16 bit luminance pixel into
32 bit unsigned integer RGBA (L, L, L, 1), and I stored the x element into a 32 bit unsigned int.
After that, you can use the bottom 16 bits of L however you like.
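
Put together, a fragment-shader sketch of the integer path looks something like this (same placeholder uniform name as above):

#extension GL_EXT_gpu_shader4 : require

uniform usampler2D myUnsignedIntegerTexture;
varying vec2 vTexCoord;

void main()
{
    // the lookup returns a uvec4; the 16-bit intensity sits in the low bits of .x
    unsigned int L = texture2D(myUnsignedIntegerTexture, vTexCoord).x;

    // normalize manually if you want something displayable
    float gray = float(L) / 65535.0;
    gl_FragColor = vec4(vec3(gray), 1.0);
}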

But unless you want to do bit-wise operations on the texel (which EXT_gpu_shader4 allows), there
is no need to use integer textures or samplers. If you’re writing a shader to do typical imaging
operations like contrast/gamma/sharpen, then regular floats are fine.

I found that the value is clamped to 0.0 ~ 1.0 (because of the sampler), not 0 ~ 65535, which means a huge loss of information…

No, it does not. Integer values, unless you are using special integer textures, are converted to a [0.0, 1.0] range when accessed from a texture unit. That means that (if you’re getting a 16-bit-per-channel luminance format) a value of 65535 will become 1.0, 32768 will be 0.5, and 0 will be 0.0. All of the data is still there, but it has been converted into floating point.

Just convert it back into integers by multiplying by 65535. Though one wonders why you need them as integers.
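
If you really do need exact integers back on the shader side, round rather than truncate when you undo the scaling; a one-line sketch, using the same sampler and varying names as above:

float value = floor(texture2D(myTexture, vTexCoord).r * 65535.0 + 0.5);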

Did you read the instructions? Everything you need to know is documented in the specifications.

1: You don’t need to put a carriage return at the end of your lines in your posts. Your web-browser will word wrap for you.

2: Specifications are not documentation. Specifications are highly technical, arcane, and obtuse documents. Expecting people to “learn from the spec” is both unhelpful and foolish.

Ok, I shoved this into the Wiki
http://www.opengl.org/wiki/index.php/GL_EXT_texture_integer

Hi,

I have a small question about integer textures. I want to use them to store integer values in texture memory.

I use GL_LUMINANCE16UI_EXT, which works fine on my card:
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16UI_EXT, checkImageWidth, checkImageHeight, 0, GL_LUMINANCE_INTEGER_EXT, GL_UNSIGNED_SHORT, shortArray);

But I only get values if I generate mipmaps:
glGenerateMipmapEXT(GL_TEXTURE_2D);

Is there a way to use integer textures without calling glGenerateMipmapEXT(GL_TEXTURE_2D);?

Thanks,

Jotschi

Is there a way to use integer textures without calling glGenerateMipmapEXT(GL_TEXTURE_2D);?

What happens if you make the texture non-mipmapped? That is, set the minification filter so it doesn’t do mipmap interpolation, and set the mip parameters on the texture to just use the base level.
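
Something like this should do it (a sketch; the point is that the default minification filter is GL_NEAREST_MIPMAP_LINEAR, so a texture with only level 0 is incomplete and samples as black/zero):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);  // don't touch mip levels when minifying
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);  // integer textures want NEAREST anyway
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 0);            // tell GL only the base level exists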
