OpenCL on Linux, which implementation to choose?

Hi everyone,

I’m interested in writing Computational Fluid Dynamics solvers with OpenCL, and I’m working on Linux. I have to get a graphics card and I’m facing a dilemma: AMD’s OpenCL implementation supports CPUs as well as GPUs, but on the other hand, I’ve heard that NVIDIA has better driver support on Linux. I would appreciate any experience-based advice, and I apologize for the subjective nature of the question.


I’ve got a Linux box with two ATI Radeon HD 5970 cards (two GPUs each). I’m running Ubuntu and it all works fine. It was initially a bit tricky to set up so that all four GPUs were usable, but it works.
I access the machine remotely via ssh, which is also okay, but a user needs to be logged in locally (as well as remotely) before the GPUs show up as OpenCL devices. That’s a bit annoying, and I think NVIDIA has better support for remote access. Otherwise it’s fine.

If you’re interested in using the CPU as well as the GPU but prefer NVIDIA to AMD, you can install both SDKs and use NVIDIA’s implementation for the GPU and AMD’s for the CPU.

Thanks, dominik.

I think I’ll opt for the ATI cards, because I will need to see the CPU as a device, and they seem to work fine on Linux. I don’t know enough about the architectures to prefer one brand over the other; it’s just that being able to use spare CPU cycles could be important, and I don’t want to rule that option out from the start.

Switching implementations… hm… couldn’t this be a performance problem? I mean, if I have mapped my data onto the GPU’s memory and I’m using a different implementation (NVIDIA’s host code, say) to operate on host memory… that sounds like a bad thing to do. Unless the OpenCL C data types are exactly the same and the buffers are accessible by both implementations?

Try as root:

chmod o+rw /dev/ati/*

This just makes the ATI device nodes readable and writable by everyone, so the GPUs should be accessible to users logged in remotely.

That doesn’t work, unfortunately. The CPU is still the only device I see in OpenCL.

I run on Red Hat Linux remotely via ssh, and I have to “export DISPLAY=:0”.
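Spelled out, the workaround looks something like the following, assuming an X session is actually running on display :0 of the remote box (the application name is just a placeholder; some AMD driver releases reportedly also honour a COMPUTE variable, but treat that as an assumption to test):

```shell
# Over ssh on the remote machine: point the driver at the local X display.
# Only works if an X session is active on :0 and you may use it.
export DISPLAY=:0
export COMPUTE=:0   # assumption: honoured by some ATI/AMD driver versions
./my_opencl_app     # placeholder for your own binary
```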

I have to do that for the user that’s logged in locally. But then I can only see the GPUs remotely if I log in as that same user, not as a different user.