Using unsigned byte in Shader

Hello again.
I am working on GPGPU.
I have some 16-bit data in the red channel of a texture created with this specification:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_UNSIGNED_BYTE, Data);

This is clamped between 0 and 1. I want to do some calculations on it in the shader. The problem is that any computations I make are completely wrong, because of the clamping. Or that's what I think.

I tried to use:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_FLOAT, DATA);

but this creates a stack overflow while creating the input texture. Is there a way to use that unsigned char* as a float?

Or can anyone explain to me whether the clamped inputs can still be used in the shader even if at some point their values get bigger than 1?
And how can I get the clamped results back into their original form?

I tried to read the results back using a texture of this type:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_FLOAT, NULL);

The values were clamped if I used the input with GL_UNSIGNED_BYTE. I tried multiplying them by 2^16 - 1, but the results were completely off.

In your first call
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_UNSIGNED_BYTE, Data);
your user data is unsigned byte fixed-point data. It is scaled by 1/255 during the conversion to float, and therefore you get values in the range [0, 1] inside your texture.
The same would be true if your texture internal format were simply GL_RGBA8: texture lookups return values in the range [0, 1].
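A minimal sketch of that conversion rule, just to make the scaling explicit (these helper functions are only for illustration):

#include <cstdint>

// The driver normalizes fixed-point source data on upload:
// GL_UNSIGNED_BYTE values are divided by 255, GL_UNSIGNED_SHORT by 65535,
// which is why every texel ends up in the range [0, 1].
float normalizeUByte(std::uint8_t v)   { return v / 255.0f; }
float normalizeUShort(std::uint16_t v) { return v / 65535.0f; }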

In your second call
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_FLOAT, DATA);
DATA must point to floating-point data. If you just point it at the same unsigned byte data, the driver will read four times as much data as the buffer actually contains, and the program most likely crashes with an access violation.

If you want the texture data to contain values as 16-bit floating point in the range [0, 255], you have to copy the unsigned byte data to float data (no scaling!) first and then use the second upload call
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0, GL_RED, GL_FLOAT, DATA);
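A minimal sketch of that copy, assuming Data and size are the variables from your calls and that the usual GL/extension headers are already set up (one plain cast per element is all it takes):

#include <vector>

void uploadAsFloat(const unsigned char* data, int size)
{
    // Widen each 8-bit value to a plain float, with no scaling,
    // so the texture ends up holding values in [0, 255].
    std::vector<float> floatData(size * size);
    for (int i = 0; i < size * size; ++i)
        floatData[i] = static_cast<float>(data[i]);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, size, size, 0,
                 GL_RED, GL_FLOAT, floatData.data());
}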

Not sure what you’re trying to achieve but there should be more elegant ways than to blow up 8-bit unsigned byte data to 64-bit RGBA16F.

Simple question: how do I copy the unsigned char data to float data? Using itoa maybe?

Basically I wanted to use 16-bit data as input, but if I use GL_UNSIGNED_SHORT with the same other parameters I get a stack overflow.

So I guess it would be more efficient to use LUMINANCE16 for the same job, without blowing the data up to the 64 bits of RGBA16F?

My goal is to pass in a DICOM image that is 16-bit, which I read into an unsigned char*. I need to calculate some things, get the results back, and write them to a file.

At the moment I actually read back the unsigned byte values and write them to a file, using itoa to convert them to base-10 text, but I can see that the results are a bit different from the inputs. As a test I made the shader return the texture exactly as it receives it, and the values still differ.

What you are doing doesn't make a whole lot of sense. You have 16-bit integer data, so you should ask the driver to upload it directly to VRAM instead of having the driver convert it to RGBA16F, which may or may not cause a loss of precision.

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16, size, size, 0, GL_LUMINANCE, GL_UNSIGNED_SHORT, Data);

The default for glPixelStorei(GL_UNPACK_ALIGNMENT, X) is 4, and this causes problems for people, so just set it to 1 in case your texture row size is not a multiple of 4 bytes.
The internal format is GL_LUMINANCE16 and the external format is described by “GL_LUMINANCE, GL_UNSIGNED_SHORT”, so the driver can upload the data directly to VRAM.

In your shader, each RGBA value you sample will be {luminance, luminance, luminance, 1.0}
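If it helps, a fragment shader along these lines could recover an approximation of the stored 16-bit value from the sampled red channel (the sampler name is made up, and writing values above 1.0 only makes sense when rendering to a float target such as RGBA16F):

const char* fragmentSrc =
    "uniform sampler2D lumTex;                                      \n"
    "void main()                                                    \n"
    "{                                                              \n"
    "    // LUMINANCE16 texels come back normalized to [0, 1],     \n"
    "    // so scaling by 65535.0 roughly restores the raw value.  \n"
    "    float normalized = texture2D(lumTex, gl_TexCoord[0].st).r;\n"
    "    float raw = normalized * 65535.0;                         \n"
    "    gl_FragColor = vec4(raw, raw, raw, 1.0);                  \n"
    "}                                                              \n";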

And how would I get it back and just write it to a file as floats?

After a lot of time and effort from -nico-, I finally got the values uploaded to the video card in this thread:

http://www.opengl.org/discussion_boards/ubbthreads.php?ubb=showflat&Number=238721#Post238721

No need to read anything back.
Convert the data yourself.
If you want to remap to values between 0.0 and 1.0, just divide each pixel by the maximum representable 16-bit value, which is 65535.
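A small sketch of that CPU-side conversion, assuming the original pixels are available as an unsigned short buffer (the function name and plain-text output format are only an example):

#include <cstdio>
#include <cstdint>

void writeNormalized(const std::uint16_t* pixels, int count, const char* path)
{
    FILE* file = std::fopen(path, "w");
    if (!file) return;
    for (int i = 0; i < count; ++i)
        std::fprintf(file, "%f\n", pixels[i] / 65535.0f);  // 65535 = max 16-bit value
    std::fclose(file);
}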

I am working on displaying DICOM medical images as well. A lot of those images have 16-bit pixels, signed or unsigned. They use window/leveling to determine the range of pixel values in the image that is going to be mapped down to the screen. There are two values in the DICOM file called window center and window width. Those values determine the range of 16-bit values that is going to be resampled (to 8 bits, or whatever the output of the display device is) for display. So if c is the center, w is the width and the output values are 0-255, all pixels that fall in the interval [c - w/2, c + w/2[ are mapped linearly to values between 0 and 255, all values below c - w/2 are mapped to 0, and all values above c + w/2 are mapped to 255.
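In code, that mapping looks roughly like this (a CPU-side sketch; the same arithmetic translates directly to a fragment shader, and the 0-255 output range is just the example from above):

// Window/level: [c - w/2, c + w/2[ maps linearly to [0, 255];
// everything below the window goes to 0, everything above to 255.
float windowLevel(float value, float c, float w)
{
    float low  = c - w / 2.0f;
    float high = c + w / 2.0f;
    if (value < low)   return 0.0f;
    if (value >= high) return 255.0f;
    return (value - low) / w * 255.0f;
}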

Though the center and width are parameters determined at acquisition time, users may have to change those values (and often do) when looking at images, especially when they want to see other structures (soft tissue, air, etc.). So I thought about doing it in a fragment shader, because that would allow smooth real-time operation.

As a test, I loaded the raw pixel data into a texture using glTexImage2D with LUMINANCE16 and the GL_SHORT pixel type (my test image is signed). In the fragment shader I get normalized float values, so I multiply them by 32768 and then use the algorithm above to map the values (yes, back to 0-1) so that only the right range of values is displayed.

It worked, but when I compare the result with what I get from a reference application, the image in my test program seems to have slightly less contrast than the one in the other application. It's a very, very small difference, some would probably say it's not important, but it's there. It's probably due to some loss of precision with the normalized floats.

A fragment shader is interesting for this because it can run in real time. If you don't use a fragment shader, you have to resample the raw pixel data to 8 bits with the new center and width values every time they change, and then recreate the texture to display it again. That isn't a problem with CT images because they are typically 512x512, but some CR images are 2200x3000 (which is NPOT).

I was looking at VBOs and PBOs to do the resampling without creating a texture every time, and I came across some documentation about HDR and FBOs. So, using GL_RGBA16F would probably give me more precision… or would it not? Those are 16-bit floats; which part is the mantissa and which part is the exponent? Depending on the answer, I may not get much more precision.

LUMINANCE16 isn't supported with FBOs, so I guess I'll have to use GL_RGBA16F, but do I use GL_RED or something like that? Do you think this would solve my problem?

P.S.

I'm sorry for the long post, but I thought it would help to clarify the problem a bit. There are things you have to take care of: if the image has an intercept, you have to add that value to the pixel value before you scale it down.

You don't want to multiply the value in your fragment shader by 65535. There is a tag in the image called "bits stored" that says 16, because each pixel is stored as a 16-bit value. However, there is a tag named "high bit" that says 15 (for my CT image; it can be 12 or 10 for other images). This means that the highest value is 2^15 = 32768.
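A tiny sketch of that per-pixel preparation, following the numbers described above (rescaleIntercept and highBit would come from the DICOM header tags mentioned in these notes; this is only an illustration, not a general DICOM recipe):

// Apply the intercept before scaling, and derive the divisor from the
// "high bit" tag instead of assuming the full 16-bit range.
// For the CT image above (high bit = 15) the divisor is 2^15 = 32768.
float preparePixel(float stored, float rescaleIntercept, int highBit)
{
    float value   = stored + rescaleIntercept;
    float divisor = static_cast<float>(1 << highBit);
    return value / divisor;  // normalized value handed to window/level
}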

Regards
