using GL_FLOAT for texture targets

This might be a beginner question, but I figured you guys would probably know more on the subject.

When calling a glTexImageXD routine with a load of floats, specifying GL_FLOAT as the data type, what actually happens? How does GL map the float range and use it?

Do negative values get ignored?

Thanks

Chris

In basic OpenGL, the texture data is typically converted in the driver to unsigned 8-bit precision and clamped to the [0,1] range.

With extensions on the latest cards you may get other behavior, if you request extended floating-point formats.
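To make that concrete, here's a rough sketch of an ordinary float upload (the function name and sizes are just for illustration):

  #include <GL/gl.h>

  /* Floats go in through the external type, but with a plain RGBA8
   * internal format the driver converts each component to unsigned
   * 8-bit and clamps to [0,1]: 1.7f becomes 255, -0.3f becomes 0. */
  void upload_float_texture(const float *pixels, int w, int h)
  {
      GLuint tex;
      glGenTextures(1, &tex);
      glBindTexture(GL_TEXTURE_2D, tex);

      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,  /* internal format   */
                   w, h, 0,
                   GL_RGBA,    /* external format: RGBA in memory    */
                   GL_FLOAT,   /* external type: data arg is floats  */
                   pixels);
  }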

Thanks,

I'm using new hardware, an FX 5900 or R9700 and above. If I specify GL_RGBA16 as the internal format in a glTexImageXD call, I guess I'll get 16-bit precision clamped to [0,1]. Does anybody know how these values are mapped? Or should I use an extension such as float_buffer instead to guarantee the behaviour?
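i.e. something like this (w, h and float_data are placeholders):

  /* floats in, GL_RGBA16 internal -- 16-bit unsigned, clamped to [0,1]? */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16,
               w, h, 0,
               GL_RGBA, GL_FLOAT, float_data);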

Thanks for the help

There are two formats.

The external one, which includes both “format” and “type”, tells the driver how to cast and interpret the void * data argument.

Then there’s the internal format (like RGB8 or INTENSITY32F), which determines what sort of type conversions and component replication or merging need to happen.

Note also that the internal format is strictly a hint; the driver is free to store the texture at a different precision than you requested.
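A quick sketch of the two working together (sizes are illustrative):

  /* External side: unsigned bytes in memory, one luminance component
   * per texel.                                                       */
  GLubyte gray[64 * 64];

  /* Internal side: ask for RGB8, so the single luminance component
   * gets replicated into R, G and B at 8-bit precision.  Since the
   * internal format is a hint, the driver may pick something else.   */
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8,      /* internal format     */
               64, 64, 0,
               GL_LUMINANCE,                   /* external format     */
               GL_UNSIGNED_BYTE,               /* external type       */
               gray);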

Thanks -
Cass