I am working on displaying DICOM medical images as well. A lot of those images have 16-bit pixels, signed or unsigned. They use window/leveling to determine the range of pixel values in the image that gets sampled down to the screen. Two values in the DICOM file, the window center and the window width, determine the range of 16-bit values that will be resampled (to 8 bits, or whatever the output of the display device is) for display. So if c is the center, w is the width and the output values are 0-255, all pixels that fall in the interval [c - w/2, c + w/2) are mapped linearly to values between 0 and 255, all values below c - w/2 are mapped to 0, and all values at or above c + w/2 are mapped to 255.
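In Python-style code (the function name and the 0-255 default are mine, just for illustration), the mapping I described looks like this:

```python
def window_level(pixel, center, width, out_max=255):
    """Map a raw pixel value through a DICOM window center/width.

    Values below center - width/2 clamp to 0, values at or above
    center + width/2 clamp to out_max, and everything in between is
    mapped linearly.
    """
    lo = center - width / 2.0
    hi = center + width / 2.0
    if pixel < lo:
        return 0
    if pixel >= hi:
        return out_max
    return round((pixel - lo) / width * out_max)

# Example: a typical soft-tissue window, center 40 / width 400.
print(window_level(-200, 40, 400))  # 0   (below the window)
print(window_level(300, 40, 400))   # 255 (above the window)
print(window_level(0, 40, 400))     # 102 (inside, mapped linearly)
```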
Though the center and width are parameters determined at acquisition time, users often need to change those values when looking at images, especially when they want to see other structures (soft tissue, air, etc.). So I thought about doing it in a fragment shader, because it would allow smooth real-time operation.
As a test, I loaded the raw pixel data into a texture using glTexImage2D with LUMINANCE16 and the GL_SHORT pixel type (my test image is signed). In the fragment shader I get normalized float values, so I multiply them by 32768, then use the algorithm above to map the values (yes, back to 0-1) so that only the right range of values is displayed.
It worked, but when I compare the result with what I get from a reference application, the image in my test program has slightly less contrast than the one in the other application. It's a very, very small difference; some would probably say it's not important, but it's there. It's probably due to some loss of precision with the normalized floats.
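One quick sanity check I can sketch in Python: a 16-bit integer survives a 32-bit float round-trip exactly, so the shader's float math itself shouldn't drop whole gray levels. What might matter is the normalization divisor — as far as I can tell from the GL spec, signed 16-bit data is normalized with 32767 (or (2s+1)/65535 in older versions), never 32768, so multiplying by 32768 in the shader rescales everything very slightly:

```python
import struct

# A 16-bit value is exactly representable in a 32-bit float, so
# round-tripping through float32 loses nothing by itself.
raw = 31337
f32 = struct.unpack('<f', struct.pack('<f', float(raw)))[0]
assert f32 == raw

# But if GL normalizes by 32767 while the shader multiplies by 32768,
# every recovered value comes back a hair too large — almost a full
# gray level near the top of the range.
normalized = raw / 32767.0      # what GL may hand the shader
recovered = normalized * 32768.0
print(recovered - raw)          # ~0.956
```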
A fragment shader is interesting for this because it works in real time. Without one, you have to resample the raw pixel data to 8 bits with the new center and width values every time they change, then recreate the texture to display it again. That isn't a problem with CT images, because they are typically 512x512, but some CR images are 2200x3000 (which is NPOT).
I was looking at VBOs and PBOs to do the resampling without creating a texture every time, and I saw some documentation about HDR and FBOs. So, using GL_RGBA16F would probably give me more precision … or would it not? Those are 16-bit floats; what part of them is the mantissa and what part is the exponent? Depending on the answer, I may not get much more precision.
LUMINANCE16 isn't supported with FBOs, so I guess I'll have to use GL_RGBA16F, but do I use GL_RED or something like that? Do you think this would solve my problem?
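For what it's worth, GL half floats follow the IEEE 754 half-precision layout: 1 sign bit, 5 exponent bits, 10 mantissa bits, so only integers up to 2^11 = 2048 are exactly representable. Python's struct module supports that format ('e'), which makes it easy to see the rounding:

```python
import struct

def to_half_and_back(x):
    """Round-trip a value through IEEE 754 half precision
    (1 sign bit, 5 exponent bits, 10 mantissa bits)."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_half_and_back(2048.0))  # 2048.0 — still exact
print(to_half_and_back(2049.0))  # 2048.0 — rounded away: only ~11 bits of precision
```

So a 16-bit float actually holds fewer significant bits than a 16-bit normalized integer format.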
P.S.
I'm sorry for the long post, but I thought it would help to clarify the problem a bit. There are things you have to take care of: if the image has an intercept, you have to add that value to the pixel value before you scale it down.
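Concretely (the function name is mine), DICOM pairs the intercept with a slope — Rescale Slope (0028,1053) and Rescale Intercept (0028,1052) — and the transform applied before windowing is:

```python
def modality_value(stored, slope=1.0, intercept=0.0):
    """Apply the DICOM rescale transform (Rescale Slope / Rescale
    Intercept) to a stored pixel value before window/leveling."""
    return stored * slope + intercept

# Typical CT case: slope 1, intercept -1024, so stored value 1024
# corresponds to 0 Hounsfield units.
print(modality_value(1024, 1.0, -1024.0))  # 0.0
```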
You don't want to multiply the value in your fragment shader by 65535. There is a tag in the image called "Bits Stored" that says 16, because each pixel is stored as a 16-bit value. However, there is also a tag named "High Bit", which is 15 for my CT image (it can be 12 or 10 for other images). This means that the highest value is 2^15 = 32768.
Regards