If I use the same start data and do the same calculations, will the results be equal on different devices? Assuming, of course, there are no problems with parallelism, thread dependency, etc.
Not necessarily identical, but they will be very close. The reason is that different devices may support different rounding modes, some support denormalized floats while others do not, and so on. Also, the results of math functions may differ very slightly, within the error limits allowed by the specification.
It looks like a safer solution would be to use fixed-point numbers. But if I remember correctly, GPUs are optimized for floats.
Unless you are doing scientific computations, the difference between devices is going to be so minimal it won't matter. If necessary, rather than using fixed point, I would slightly modify my code so that it handles very small differences gracefully.
I want to use OpenCL for AI in a multiplayer game, so very small differences at the beginning can compound into huge differences after a long time.
In OpenCL documentation, I found:
float - A single precision float. The float data type must conform to the IEEE 754 single precision storage format.
It looks like there is a standard for this type, but does "must conform" mean that it actually does conform on real devices?
I honestly can't understand your question. The representation of floats in OpenCL is identical to IEEE 754-2008. However, there's some leeway in corner cases such as the behavior of denormalized floats, etc. Also, different devices may have different rounding modes. If you are only concerned about desktop devices, then round-to-nearest-even is the default and it's guaranteed.
More complex functions like sine, cosine, etc. may vary very slightly between devices. This is governed by section 7.4 of the OpenCL 1.1 spec. Notice that the precision of sine/cosine/etc. in C or C++ is not defined at all, so OpenCL is actually going to give you more predictable accuracy.
Again, I would not worry about precision issues.