I have been working on getting textures with 16-bit precision per channel recently, playing with pbuffers etc.
I was reading an NVIDIA document that claims the following:
"And best of all, NVIDIA's 64-bit floating point texture filtering and blending technology is implemented in hardware. There is no pixel shader encode or decode to deal with. Furthermore, it is already exposed in Microsoft DirectX® 9.0 and [...]"
Am I correct in thinking that this is exposed through NV_float_buffer, ATI_texture_float, WGL_ATI_pixel_format_float, and the upcoming (released but patent-encumbered) ARB_texture_float and ARB_color_buffer_float?
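For context, here is a minimal sketch of what I mean by a 16-bit-per-channel float texture, going through the ATI/ARB token (the two extensions share the same enum value; NV_float_buffer instead uses GL_FLOAT_RGBA16_NV with the GL_TEXTURE_RECTANGLE_NV target). This assumes glext.h provides the tokens and the driver actually exposes one of these extensions; the function name is just mine for illustration:

#include <GL/gl.h>
#include <GL/glext.h>

/* Sketch: create a 2D texture with 16-bit float precision per channel,
 * assuming ATI_texture_float (or ARB_texture_float) is supported. */
GLuint create_half_float_texture(int width, int height, const float *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* GL_RGBA_FLOAT16_ATI (ATI_texture_float) and GL_RGBA16F_ARB
     * (ARB_texture_float) share the same token value, 0x881A. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA_FLOAT16_ATI,
                 width, height, 0, GL_RGBA, GL_FLOAT, pixels);

    /* Whether LINEAR filtering of float textures runs in hardware is
     * exactly my question above; NEAREST is the safe fallback on
     * hardware that can't filter float formats. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}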