You are, quite obviously, not reading anything that was said to you in that thread.
As far as floating-point computation in your fragment shader goes, it's platform-dependent. ATi's Radeon 9500 and up promise 24-bit float values per component (96-bit vectors), while nVidia's cards offer 32-bit float values per component (128-bit vectors). Either of those should be good enough for what you want.
As for an actual floating-point frame buffer, the hardware can't do it. If D3D provides one, it's doing an internal cheat that you'll have to emulate. What you'll want to do is create a floating-point render target and then, in a separate pass, render that to the non-floating-point screen. Current hardware simply can't display a floating-point target directly.
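That final pass, stripped of all the GL plumbing, boils down to clamping each float channel to the displayable [0, 1] range and quantizing it to 8 bits. A toy sketch of just that arithmetic (the helper name is mine, not from any API):

```c
#include <assert.h>

/* Per-channel conversion from a float render target value to an 8-bit
 * screen value: clamp to [0,1], then round to the nearest byte. */
static unsigned char quantize_channel(float v)
{
    if (v < 0.0f) v = 0.0f;   /* values outside [0,1] aren't displayable */
    if (v > 1.0f) v = 1.0f;
    return (unsigned char)(v * 255.0f + 0.5f);  /* round to nearest */
}
```

In the real thing the clamp happens for you when the copy pass writes into the 8-bit target; the point is that over-range values in the float buffer survive intermediate passes and are only lost at this last step.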
Yes, there is no ARB extension for floating-point targets yet. That's more nVidia's fault than anything else, as their floating-point target capabilities are far more limited than ATi's. The nVidia extension (NV_float_buffer) only provides RGBA float buffers, while ATi's (ATI_texture_float) covers all kinds of textures (luminance, RGB, etc.). Also, the nVidia one, if I recall correctly, only supports NV_texture_rectangle as a target, while the ATi one supports all texture targets, even 3D (though that would be brutal on memory).
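Picking the right path at runtime means scanning the extension string, and a plain strstr() will false-positive on prefixes, so check whole tokens. A minimal sketch (the extension names are the real ones; the helper is mine, and in a real program you'd feed it glGetString(GL_EXTENSIONS)):

```c
#include <assert.h>
#include <string.h>

/* Return 1 if `name` appears as a whole space-separated token in
 * `extensions`, 0 otherwise. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == extensions) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;   /* partial match; keep scanning */
    }
    return 0;
}
```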
In any case, supporting both isn't too difficult, though having to use NV_texture_rectangle on the nVidia path does change your texture coordinates: rectangle textures use unnormalized texel coordinates, [0, width] x [0, height], rather than [0, 1]. But this only matters in the final copy stage, so it isn't terribly important or painful.
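The coordinate difference is just a scale by the texture's dimensions, applied when you draw the full-screen quad in the copy pass. A trivial sketch (helper name is mine):

```c
#include <assert.h>

/* Map a normalized [0,1] texture coordinate to the unnormalized
 * [0,size] range that NV_texture_rectangle expects. On the ATi path
 * you'd skip this and use the normalized coordinate directly. */
static float to_rect_coord(float normalized, int texture_size)
{
    return normalized * (float)texture_size;
}
```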
As for ARB_vertex_program_2, I doubt it'll ever see the light of day. ATi is pushing too hard for ARB_glslang to bother with vp2. On the plus side, if you like glslang, or the notion of having a C compiler in your driver, you'll like what ATi gives you.