The clamping/wrapping behavior is the real issue. The idea that one shader compiled for two different pieces of hardware will produce vastly different results is not a good one.
Even without precision hints you can still get different results on ATI and nVidia HW due to the 24 vs 32 bit difference. Probably not “vastly” different, but I think that in computations for which low precision is appropriate, you wouldn’t get vastly different results with 24 vs 16 bits either. Generally, the possibility of getting different results is not an excuse IMO, because:
- Hints are optional.
If you want the best performance, then you are free to use precision hints; just use caution.
If you want the most uniform results across all HW, then just don’t use the hints. Ignore them. Simple, isn’t it?
- Hints are explicit
If a programmer uses the hint explicitly, then he must know what he’s doing, doesn’t he? In such a case the compiler should simply assume that overflow never occurs, for any data the shader will be given; otherwise results are undefined. If overflow can occur, then you shouldn’t have used precision-reducing hints in the first place (see the sketch right after this list).
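To make that contract concrete, here is a minimal C sketch (the crude fp16 emulation and the sample value are mine, purely for illustration): the HW results diverge only because the value overflows what the programmer promised would fit in 16 bits.

[code]
#include <math.h>
#include <stdio.h>

#define FP16_MAX 65504.0f   /* largest finite fp16 value */

/* Crude emulation of rounding a result to 16-bit float precision
   (10 explicit mantissa bits). Ignores denormals and rounding-mode
   details - illustration only. */
static float to_fp16(float x)
{
    if (fabsf(x) > FP16_MAX)
        return x < 0.0f ? -INFINITY : INFINITY;  /* overflow */
    int e;
    float m = frexpf(x, &e);                     /* x = m * 2^e */
    m = floorf(m * 2048.0f + 0.5f) / 2048.0f;    /* keep 11 significant bits */
    return ldexpf(m, e);
}

int main(void)
{
    /* A shader author who hints 'half' for this result has promised it
       stays within 16-bit range. Break the promise and the HW results
       diverge wildly; keep it and they stay close. */
    float v = 300.0f;
    printf("32-bit HW: %g\n", v * v);           /* 90000 */
    printf("16-bit HW: %g\n", to_fp16(v * v));  /* inf - overflow */
    return 0;
}
[/code]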
(from the spec issue you quoted)
On the other hand, there is a desire to ensure that shaders are portable between different implementations. In order to achieve portability, implementations that don’t have native support for half will be penalized because they will have to clamp intermediate calculations to the appropriate precision.
Requiring such clamping is an ill-conceived idea. Cg doesn’t do this. Enforcing clamping of a ‘hinted’ type just for the sake of portability is like enforcing portability of undefined results - it doesn’t make sense.
If we get a 16-bit-per-channel framebuffer pixel format, should GL clamp it to 8 bits to ensure portability? If your app relies on framebuffer data being rounded to 8 bits, then you are exploiting an undefined result, and your app is already not portable.
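To put a number on that analogy, a trivial C sketch of my own (not GL code): the same normalized color value quantized to 8 and 16 bits per channel reads back differently, and nobody calls that a portability bug.

[code]
#include <stdio.h>

int main(void)
{
    /* The same normalized color stored in 8- and 16-bit-per-channel
       framebuffers reads back as different values; here the two
       read-backs already differ in the 3rd decimal place. */
    double c = 0.1234567;
    unsigned q8  = (unsigned)(c * 255.0   + 0.5);
    unsigned q16 = (unsigned)(c * 65535.0 + 0.5);
    printf("8-bit  channel: %5u -> %.7f\n", q8,  q8  / 255.0);
    printf("16-bit channel: %5u -> %.7f\n", q16, q16 / 65535.0);
    return 0;
}
[/code]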
Does any shading language require nVidia to clamp its 32-bit floats to 24 bits to ensure portability? None does. In just the same way, ATI wouldn’t be required to clamp its 24-bit floats to 16 bits.
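For scale, here’s a small C sketch of how much those mantissa widths actually matter (the bit counts are my assumptions about the HW of the day: fp32 keeps 23 explicit mantissa bits, ATI’s 24-bit float about 16, fp16 only 10):

[code]
#include <math.h>
#include <stdio.h>

/* Round x to 'mbits' explicit mantissa bits - a rough stand-in for how
   the same computation lands on 32-, 24- and 16-bit float HW.
   Illustration only: ignores denormals, overflow and rounding modes. */
static double keep_mantissa_bits(double x, int mbits)
{
    if (x == 0.0) return 0.0;
    int e;
    double m = frexp(x, &e);               /* x = m * 2^e, 0.5 <= |m| < 1 */
    double scale = ldexp(1.0, mbits + 1);  /* mbits bits after the leading 1 */
    return ldexp(floor(m * scale + 0.5) / scale, e);
}

int main(void)
{
    double x = 1.0 / 3.0;
    printf("fp32: %.9f\n", keep_mantissa_bits(x, 23));  /* 0.333333343 */
    printf("fp24: %.9f\n", keep_mantissa_bits(x, 16));  /* 0.333332062 */
    printf("fp16: %.9f\n", keep_mantissa_bits(x, 10));  /* 0.333251953 */
    return 0;
}
[/code]

The 32-vs-24 gap that nobody asks to clamp away is the same kind of gap as 24-vs-16.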
It seems like a misconception resulting from perceiving the fixed/half/etc. types in the strict C/C++ sense (like short vs int vs byte, or float vs double) rather than taking them as the mere precision hints they should be.
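The contrast is easy to show in plain C (a trivial sketch of my own): C’s integer types carry strictly defined conversion semantics that programs may legitimately rely on, while a precision hint carries no such guarantee.

[code]
#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* In C, conversion to an unsigned type is defined modulo 2^N:
       USHRT_MAX + 1 wraps to 0 on every conforming implementation,
       and programs may rely on that. */
    unsigned short s = USHRT_MAX;
    s = (unsigned short)(s + 1);
    printf("%u\n", (unsigned)s);   /* always 0 */

    /* A shader precision hint promises nothing comparable: the HW may
       compute at 16, 24 or 32 bits, so there is no fixed width at which
       results clamp or wrap that a shader could portably rely on. */
    return 0;
}
[/code]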