Problem with clCreateFromGLTexture2D

Hello everybody,

I’m currently experimenting a bit with the OpenCL/OpenGL interoperability features.

I wrote a simple raytracer in OpenCL which works without problems. Now I’m trying to modify it so that the output is written directly to an OpenGL texture.

However, I already fail when trying to hand the OpenGL texture to OpenCL via clCreateFromGLTexture2D. The call does not succeed; instead it returns an “INVALID_VALUE” error code. I can’t figure out why, so I will post the relevant code parts here in the hope that someone can give me a hint.

The code is written in C#. I use a wrapper for OpenCL, and Tao as the OpenGL wrapper. However, I hope the problems I’m experiencing have nothing to do with the wrapper implementations.

I initialize the OpenCL context like this:

            IntPtr curDC = wglGetCurrentDC();
            IntPtr curContext = wglGetCurrentContext();
            IntPtr[] platforms = new IntPtr[1];
            uint numPlatforms;
            OpenCL.GetPlatformIDs(1, platforms, out numPlatforms);
            Platform platform = new Platform(platforms[0]);
            var contextProperties = new IntPtr[] { (IntPtr) ContextProperties.GL_CONTEXT_KHR, curContext, 
                                                   (IntPtr) ContextProperties.WGL_HDC_KHR, curDC, 
                                                   (IntPtr) ContextProperties.PLATFORM, platform };
            OCLMan.CreateContextFromType(platform, DeviceType.GPU, contextProperties, null, IntPtr.Zero);
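As a side note, one classic pitfall at this step: the properties list handed to clCreateContext must be zero-terminated, and a missing trailing zero is a common cause of invalid-value errors later on. Whether your C# wrapper appends the terminator for you is worth verifying. A minimal C sketch of the expected layout (the token values are copied from cl.h / cl_gl.h; `build_props` is just an illustrative helper, not part of the CL API):

```c
#include <stdint.h>

typedef intptr_t cl_context_properties;   /* matches the cl.h typedef */

#define CL_CONTEXT_PLATFORM 0x1084
#define CL_GL_CONTEXT_KHR   0x2008
#define CL_WGL_HDC_KHR      0x200B        /* Windows/WGL path, as in the post */

/* Fill a 7-slot property list for a GL-sharing context and zero-terminate it. */
cl_context_properties *build_props(cl_context_properties gl_ctx,
                                   cl_context_properties hdc,
                                   cl_context_properties platform,
                                   cl_context_properties *out /* 7 slots */) {
    out[0] = CL_GL_CONTEXT_KHR;   out[1] = gl_ctx;
    out[2] = CL_WGL_HDC_KHR;      out[3] = hdc;
    out[4] = CL_CONTEXT_PLATFORM; out[5] = platform;
    out[6] = 0;                   /* the terminator — easy to forget */
    return out;
}
```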

The created OpenCL context works without problems; I can use it to compile and run my OpenCL code.

But then there is the part where I try to use CreateFromGLTexture2D:

            int texture;
            Gl.glGenTextures(1, out texture);
            Gl.glBindTexture(Gl.GL_TEXTURE_2D, texture);

            Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MIN_FILTER, Gl.GL_NEAREST);
            Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_MAG_FILTER, Gl.GL_LINEAR);
            Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_S, Gl.GL_CLAMP_TO_EDGE);
            Gl.glTexParameteri(Gl.GL_TEXTURE_2D, Gl.GL_TEXTURE_WRAP_T, Gl.GL_CLAMP_TO_EDGE);

            byte[] textureData = new byte[256*256*4];
            textureData = textureData.Select(x => (byte)50).ToArray();

            Gl.glTexImage2D(Gl.GL_TEXTURE_2D, 0, Gl.GL_RGBA, 256, 256, 0, Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_BYTE, textureData);

            ErrorCode errcode_ret;
            IntPtr b = OpenCL.CreateFromGLTexture2D(OCLMan.Context, (ulong) (MemFlags.WRITE_ONLY | MemFlags.ALLOC_HOST_PTR), (int) CLGLObjectType.TEXTURE2D, 0, (uint) texture, out errcode_ret);
            if (errcode_ret != ErrorCode.SUCCESS)
                MessageBox.Show("gl -> cl error!");

This never works. I have already tried experimenting with all the parameters, but I always get the same error.

You can download the whole source code here:
You only need Visual Studio 2010 to compile it; everything else is included.

I really hope someone can help me with this. :slight_smile:

By the way: I have only tested this on several NVidia GPUs (with the newest developer drivers). Is it maybe possible that clCreateFromGLTexture2D simply doesn’t work with NVidia’s OpenCL implementation?

> Is it maybe possible that clCreateFromGLTexture2D simply doesn’t work with NVidia OpenCL implementations?

That’s why you can query CL_DEVICE_EXTENSIONS with clGetDeviceInfo. Applications should always check whether “cl_khr_gl_sharing” appears in the extensions string before attempting to use CL/GL interop.
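To spell that check out: the extension string returned by clGetDeviceInfo is a space-separated token list, so a plain strstr can false-positive on a prefix of a longer extension name. A small C sketch that matches whole tokens (the clGetDeviceInfo call itself is elided; `has_extension` is an illustrative helper):

```c
#include <string.h>

/* Returns 1 iff `name` appears as a whole space-separated token in
   `ext_string` (e.g. the CL_DEVICE_EXTENSIONS string from clGetDeviceInfo). */
int has_extension(const char *ext_string, const char *name) {
    size_t len = strlen(name);
    const char *p = ext_string;
    while ((p = strstr(p, name)) != NULL) {
        int start_ok = (p == ext_string) || (p[-1] == ' ');
        int end_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (start_ok && end_ok)
            return 1;          /* exact token match */
        p += len;              /* prefix of a longer name; keep scanning */
    }
    return 0;
}
```

Usage: fetch the string once with `clGetDeviceInfo(dev, CL_DEVICE_EXTENSIONS, ...)` and then test `has_extension(exts, "cl_khr_gl_sharing")` before touching any of the GL-sharing entry points.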

As for clCreateFromGLTexture2D, the specification says that CL_INVALID_VALUE is returned if:

> values specified in flags are not valid or if value specified in texture_target is not one of the values specified in the description of texture_target.

Looking at your code, we can see that the memory flags you pass are:

    CL_MEM_WRITE_ONLY | CL_MEM_ALLOC_HOST_PTR
However, this is what the specification says about memory flags passed to clCreateFromGLTexture2D:

> flags is a bit-field that is used to specify usage information. Refer to table 5.3 for a description of flags. Only CL_MEM_READ_ONLY, CL_MEM_WRITE_ONLY and CL_MEM_READ_WRITE values specified in table 5.3 may be used.

I think that solves the mystery :slight_smile:
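A quick way to see the rule from the quoted paragraph is to check the flag bits directly. The constant values below are copied from the standard cl.h header; `flags_valid_for_gl_texture` is just an illustrative helper, not part of the CL API:

```c
/* Bit values as defined in cl.h */
#define CL_MEM_READ_WRITE     (1UL << 0)
#define CL_MEM_WRITE_ONLY     (1UL << 1)
#define CL_MEM_READ_ONLY      (1UL << 2)
#define CL_MEM_ALLOC_HOST_PTR (1UL << 4)

/* Returns 1 iff flags are legal for clCreateFromGLTexture2D:
   exactly one of the three plain access flags and nothing else. */
int flags_valid_for_gl_texture(unsigned long flags) {
    return flags == CL_MEM_READ_ONLY
        || flags == CL_MEM_WRITE_ONLY
        || flags == CL_MEM_READ_WRITE;
}

/* The corrected call would then be along the lines of (not compiled here,
   as it needs a live CL context and GL texture):

   cl_mem img = clCreateFromGLTexture2D(ctx, CL_MEM_WRITE_ONLY,
                                        GL_TEXTURE_2D, 0, tex, &err);      */
```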


Thank you for your fast reply.

However, as I said, I have already tried playing around with the parameters. Passing only (ulong) MemFlags.WRITE_ONLY gives the same error :frowning: .

Your point about the device extensions is interesting, because my GPUs do not seem to support the “cl_khr_gl_sharing” extension.
This really surprises me: my best GPU is an NVidia GTX 275, which I always considered a fairly modern graphics card.

It seems I have to give up on the idea of using OpenGL with my raytracer :frowning: .

I notice you are trying to use a GL_LUMINANCE texture. However, according to the documentation, only a few GL_RGBA formats are guaranteed to map by the standard; mappings from other formats are probably driver dependent.

> Texture objects created with other OpenGL internal formats may (but are not guaranteed to) have a mapping to a CL image format.
Could that be a source for your problem?
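To illustrate the suggestion: creating the texture with GL_RGBA8 as the internal format would put you in the territory the cl_khr_gl_sharing spec does guarantee a mapping for (GL_RGBA8 maps to CL_RGBA / CL_UNORM_INT8). A hedged C sketch, with the GL upload guarded behind a macro so the buffer part stands alone; `make_gray_rgba` and `upload` are hypothetical helper names:

```c
#include <stdlib.h>
#include <string.h>

/* Allocate a w*h RGBA8 buffer filled with a uniform gray value,
   mirroring the textureData initialization in the original C# code. */
unsigned char *make_gray_rgba(int w, int h, unsigned char v) {
    unsigned char *buf = malloc((size_t)w * h * 4);
    if (buf)
        memset(buf, v, (size_t)w * h * 4);
    return buf;
}

#ifdef USE_OPENGL
#include <GL/gl.h>
static void upload(GLuint tex, int w, int h, const unsigned char *pixels) {
    glBindTexture(GL_TEXTURE_2D, tex);
    /* GL_RGBA8 internal format + GL_RGBA client format: this pairing is in
       the cl_khr_gl_sharing list of required mappings, unlike GL_LUMINANCE. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}
#endif
```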

I am wondering about this because I want to use a GL_LUMINANCE texture in OpenCL myself, but I am a bit bewildered by the lack of support in the OpenCL standard for these image types, especially since CL_LUMINANCE and CL_INTENSITY are available in OpenCL as well.