I have an array of pixel values in the range 0.0 to 4095.0, stored as a float array.
When I upload a texture like this:
glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_FLOAT, pixels );
everything works as I expect: the texture ends up with values ranging from 0.0 to 4095.0, and my shader behaves correctly.
However, if I take an array of unsigned shorts in the range 0 to 4095 (the same values as in the float array) and do this:
glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_UNSIGNED_SHORT, pixels );
the results are different.
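Here is a stripped-down version of what I'm testing, reading the texture back with glGetTexImage to compare the two paths (context creation, extension loading, and texture setup omitted; the fill values are arbitrary):

#include <stdio.h>
#include <stdlib.h>
#include <GL/gl.h>
#include <GL/glext.h>

void compareUploads( int w, int h, int d )
{
    size_t n = (size_t)w * h * d;
    float *fpix          = malloc( n * sizeof(float) );
    unsigned short *spix = malloc( n * sizeof(unsigned short) );
    float *back          = malloc( n * sizeof(float) );
    size_t i;

    for ( i = 0; i < n; ++i ) {
        fpix[i] = (float)( i % 4096 );           /* 0.0 -> 4095.0 */
        spix[i] = (unsigned short)( i % 4096 );  /* same values */
    }

    /* float path */
    glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, w, h, d, 0, GL_ALPHA, GL_FLOAT, fpix );
    glGetTexImage( GL_TEXTURE_3D, 0, GL_ALPHA, GL_FLOAT, back );
    printf( "float upload: back[1] = %f\n", back[1] );  /* 1.0, as expected */

    /* unsigned short path, same values */
    glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, w, h, d, 0, GL_ALPHA, GL_UNSIGNED_SHORT, spix );
    glGetTexImage( GL_TEXTURE_3D, 0, GL_ALPHA, GL_FLOAT, back );
    printf( "short upload: back[1] = %f\n", back[1] );  /* not 1.0 here */

    free( fpix ); free( spix ); free( back );
}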
I have queried the internal format, and it reports back the requested format (GL_ALPHA16F_ARB).
My feeling is that the values in the short array are being clamped or rescaled somewhere during the upload. I have read through the ARB_texture_float spec, and it doesn't mention treating the uploaded pixel values differently based on the client data type.
At the moment I am working around it by keeping the short array and converting each slice into a temporary float buffer, then uploading the slices individually with glTexSubImage3D (sketch below), but this seems like a pretty crappy solution.
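For reference, the workaround looks roughly like this (pixels is the unsigned short array, dimensions as above):

/* allocate storage only, no data yet */
glTexImage3D( GL_TEXTURE_3D, 0, GL_ALPHA16F_ARB, POTwidth, POTheight, POTdepth, 0, GL_ALPHA, GL_FLOAT, NULL );

{
    float *slice = malloc( (size_t)POTwidth * POTheight * sizeof(float) );
    int x, z;

    for ( z = 0; z < POTdepth; ++z ) {
        const unsigned short *src = pixels + (size_t)z * POTwidth * POTheight;
        for ( x = 0; x < POTwidth * POTheight; ++x )
            slice[x] = (float)src[x];   /* plain cast, no scaling */

        glTexSubImage3D( GL_TEXTURE_3D, 0, 0, 0, z, POTwidth, POTheight, 1, GL_ALPHA, GL_FLOAT, slice );
    }
    free( slice );
}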
I assume I am doing something dumb here, but I can't find anything in the spec to point me in the right direction.
Can anybody help me out here? Feel free to yell if I am doing something insanely stupid…
Thanks in advance