I am doing GPU-based volume rendering on 16-bit signed CT data. I use glTexImage3D to upload my 3D texture to the GPU like this:
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, width, height, depth, 0, GL_LUMINANCE, GL_SHORT, data);
The problem is that all negative values in the texture get clamped to zero. According to the following passage from the glTexImage3D documentation, this behavior is by design, but it is not what I want in my application.
Each element is a single luminance value. The GL converts it to floating point, then assembles it into an RGBA element by replicating the luminance value three times for red, green, and blue and attaching 1 for alpha. Each component is then multiplied by the signed scale factor GL_c_SCALE, added to the signed bias GL_c_BIAS, and clamped to the range [0,1] (see glPixelTransfer).
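So, as I understand it, with the default GL_c_SCALE = 1 and GL_c_BIAS = 0, a Hounsfield value of -1000 supplied as GL_SHORT gets normalized to a small negative number (roughly -1000 / 32768 ≈ -0.03), and the final clamp to [0,1] then turns it, like every other negative value, into 0.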
The GL_NV_texture_shader extension supports signed internal formats, but only GL_SIGNED_LUMINANCE8_NV; there is no GL_SIGNED_LUMINANCE16_NV.
I could work around this issue by remapping the signed values into [0,1] before upload (e.g. via the GL_c_SCALE and GL_c_BIAS pixel transfer settings mentioned in the quote above) and converting the values back in my shader. This works, but I would lose performance (and possibly precision?).
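Roughly what I have in mind (just a sketch; the 0.5 scale/bias values and the volume / texCoord names are placeholders, and I'm assuming GL_RED_SCALE / GL_RED_BIAS is the pair that applies to a luminance upload):

glPixelTransferf(GL_RED_SCALE, 0.5f);  // remap the normalized [-1, 1] range ...
glPixelTransferf(GL_RED_BIAS,  0.5f);  // ... into [0, 1] at upload time
glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE16, width, height, depth, 0, GL_LUMINANCE, GL_SHORT, data);

// fragment shader: undo the remapping per sample
float v = texture3D(volume, texCoord).r * 2.0 - 1.0;  // back to [-1, 1]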
Does anybody know a better solution to this problem? Thanks a lot.