I’m currently using OpenCL on macOS (but my target is Windows with Nvidia and ATI).
I would like to use the texture cache and hardware linear interpolation to gain performance.
Unfortunately, the current OpenCL standard says that the result of sampling a floating-point image
with linear filtering is undefined in OpenCL 1.0. I tried it on my Mac, and it really doesn’t work.
Does anyone know whether this works on Nvidia or ATI hardware, and whether it will become part of the standard? As far as
I know, CUDA is already capable of this…
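For reference, this is the kind of kernel I mean — just a sketch (the image format and names are only for illustration), with a sampler that requests hardware linear filtering on a float image, which is exactly the case OpenCL 1.0 leaves undefined:

```c
// OpenCL C device code (sketch). Assumes src is a float image,
// e.g. CL_RGBA / CL_FLOAT, created on the host side.
__constant sampler_t linear_sampler =
    CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR;

__kernel void sample_image(__read_only image2d_t src,
                           __global float4 *dst,
                           const int width)
{
    int x = get_global_id(0);
    int y = get_global_id(1);

    // Offset by 0.5 so we sample between texel centers and the
    // linear filter actually has something to interpolate.
    float2 coord = (float2)(x + 0.5f, y + 0.5f);
    dst[y * width + x] = read_imagef(src, linear_sampler, coord);
}
```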
Thanks in advance!