OpenCL/OpenGL interop problem with textures on NVIDIA

I’ve written a simple voxel raytracer whose kernel fills an OpenGL texture, which is then displayed as a fullscreen quad in OpenGL. On my NVIDIA GTX 480 (latest drivers) the screen shows garbage and clEnqueueAcquireGLObjects returns CL_OUT_OF_RESOURCES. On my AMD Radeon 5850 (latest drivers) it works perfectly. Does anybody know why this is happening?


// Acquire the GL texture for OpenCL.
cl_int error = clEnqueueAcquireGLObjects(cmdQueue, 1, &deviceMemFramebuffer, 0, 0, 0);
if (CL_SUCCESS != error)
    std::cout << "clEnqueueAcquireGLObjects failed, error = " << error << std::endl;

// Launch the kernel
const size_t globalWorkSize[] = { framebufferWidth, framebufferHeight };
clEnqueueNDRangeKernel(cmdQueue, kernel, 2, 0, globalWorkSize, 0, 0, 0, 0);

// Release the GL texture.
clEnqueueReleaseGLObjects(cmdQueue, 1, &deviceMemFramebuffer, 0, 0, 0);

I’m having similar problems with CL/GL interop.
May I see how you set up your OpenGL texture and obtain the shared OpenCL handle?

Maybe we can help each other.
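For comparison, here is roughly how I do it (a sketch: GL 2.x-style texture creation plus OpenCL 1.1’s clCreateFromGLTexture2D; `context`, `width`, and `height` are placeholders, and the cl_context must have been created with the GL-sharing properties, CL_GL_CONTEXT_KHR etc.):

```cpp
// Create a GL texture that OpenCL can write to, then share it with CL.
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// Storage must actually be allocated before sharing; a sized internal
// format such as GL_RGBA8 is the safe choice.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glBindTexture(GL_TEXTURE_2D, 0);

cl_int err = CL_SUCCESS;
cl_mem sharedTex = clCreateFromGLTexture2D(context, CL_MEM_WRITE_ONLY,
                                           GL_TEXTURE_2D, 0, tex, &err);
if (CL_SUCCESS != err)
    std::cout << "clCreateFromGLTexture2D failed, error = " << err << std::endl;
```

Note that clCreateFromGLTexture2D will fail with CL_INVALID_GL_OBJECT if the texture has no storage allocated at the requested mip level.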

Try the 275.33 driver from NVIDIA. The latest driver, 280.26, crashes for all of our OpenCL Studio demos that use OpenGL interop. If you can live with OpenCL 1.0 for now, try the older driver.
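Also double-check the synchronization around acquire/release; missing it is a common cause of garbage output with interop, especially on NVIDIA. A sketch of the per-frame pattern, reusing the names from your snippet (glFinish/clFinish are the heavy-handed but portable baseline; the cl_khr_gl_event extension is the finer-grained alternative where supported):

```cpp
// One frame, assuming deviceMemFramebuffer was created from the GL texture.
glFinish();                          // GL must be done before CL writes the texture

cl_int err = clEnqueueAcquireGLObjects(cmdQueue, 1, &deviceMemFramebuffer, 0, 0, 0);

const size_t globalWorkSize[] = { framebufferWidth, framebufferHeight };
err = clEnqueueNDRangeKernel(cmdQueue, kernel, 2, 0, globalWorkSize, 0, 0, 0, 0);

err = clEnqueueReleaseGLObjects(cmdQueue, 1, &deviceMemFramebuffer, 0, 0, 0);
clFinish(cmdQueue);                  // CL must be done before GL samples the texture

// ...now bind the texture and draw the fullscreen quad in GL.
```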