[QUOTE]I found your description of the NVIDIA GPU and transcendental function implementations interesting and enlightening.
Why do you think you have a problem with the accuracy of the transcendental functions? I ask because, a couple of years ago, I had the same problem and tried to solve it with a Taylor series in shaders. Fortunately, I realized that the problem should be solved by changing the algorithm, not the accuracy.[/QUOTE]
I am implementing map projections on the GPU. The inverse and forward projections work fine on NVIDIA and AMD according to my testing. Sadly, my target GPU is the Intel HD 4600.
We need a very high degree of accuracy because we are displaying vector and raster maps which may be used for navigation. When the projections go wrong it is fairly obvious; in this case we are seeing tears in the rasters.
I have eliminated the use of sin/cos as much as I can, but due to the nature of the algorithms there is a limit to how much I can reorganise or simplify the maths.
NVIDIA's sin/cos is accurate to at least 6 decimal places (Quadro K620), which is sufficient the majority of the time.
My testing suggests that there are problems with parts of the sin/cos range on the Intel GPU. The program I used to test the NVIDIA GPU does not appear to work on the Intel GPU, and I have had to park this investigation for now due to time pressures.
I have implemented a minimax sin/cos polynomial which is accurate to at least 6 decimal places in float and ~12 in double over the ±360° range, which is more than sufficient for this work. I'm just concerned that it may hurt performance too much compared to the hardware sin/cos.
I will have to figure out how to do some sensible benchmarking; I'm just not sure how at the moment.