clCreateFromGLTexture undef ref and kernel args


I have started to work with OpenCL and I have come across some weird things.

First of all, I am working on windows 7, 64bit, with netbeans 7.2, mingw, and the latest NVIDIA drivers for the GeForce GTX 680.

While developing with the most recent OpenCL in this driver (I am guessing OpenCL 1.2), I get an undefined reference to clCreateFromGLTexture().

So I decided to go back to 1.1 and work with clCreateFromGLTexture2D. Here I had no problem managing one texture in my OpenCL kernel. Now, I am trying to share two textures of the same size by passing the arguments as:
__kernel void mip( __read_only image2d_t front, __read_only image2d_t back, __write_only image2d_t final )

But I get an error -30 (CL_INVALID_VALUE) in clBuildProgram(program, 1, &device, NULL, NULL, NULL);

I thank you in advance for any help and, if this is a re-post, I apologize as I couldn’t find anything here about this.

the shark

As an update, here's some code:

int err = clBuildProgram(program, 1, &device, NULL, NULL, NULL);
if (err != CL_SUCCESS) {
    size_t len;
    char buffer[2048328];

    cout << "Error: Failed to build program executable! error: " << err << endl;
    clGetProgramBuildInfo(program, device, CL_PROGRAM_BUILD_LOG,
                          sizeof(buffer), buffer, &len);

    cout << "len=" << len << endl;
    cout << buffer << endl;
}



So, len actually has the value '2'. Is this normal? Am I missing something?

That’s really strange. len should be the size of the build log. When I get errors, the build log is a bit larger than the original kernel because it is interspersed with annotations from the NVIDIA compiler. Maybe this helps: CL_INVALID_VALUE is only possible in these cases:

  1. if device_list is NULL and num_devices is greater than zero, or if device_list is not NULL and num_devices is zero.
  2. if pfn_notify is NULL but user_data is not NULL.

However, from your example, I don’t think the second case applies here.

Maybe you can provide a minimal example that compiles and shows the problem?

this is my kernel:

__kernel void mip( __read_only  image2d_t front,
                   __write_only image2d_t final,
                   __read_only  image2d_t back )
{
    // my_sampler was not shown in the original post; a typical definition:
    const sampler_t my_sampler = CLK_NORMALIZED_COORDS_FALSE |
                                 CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;
    int2 coordi = (int2)( get_global_id(0), get_global_id(1) );
    float4 color  = read_imagef(front, my_sampler, coordi);
    float4 color2 = read_imagef(back,  my_sampler, coordi);
    write_imagef( final, coordi, color2 );
}

It compiles when I have only one of the __read_only textures in the arguments (and I obviously adapted the C++ code for the argument setup).

clSetKernelArg( kernel, 0, sizeof(cl_mem), &front_texture );
clSetKernelArg( kernel, 1, sizeof(cl_mem), &final_texture );
clSetKernelArg( kernel, 2, sizeof(cl_mem), &back_texture );

cl_mem list[3];
list[0] = front_texture;
list[1] = final_texture;
list[2] = back_texture;
clEnqueueAcquireGLObjects( queue, 3, list, 0, NULL, NULL );
// Run OpenCL
clEnqueueNDRangeKernel( queue, kernel, 2,
    0, global, local,
    0, NULL, NULL );

edit: btw, I have seen in some other forums that sometimes NVIDIA doesn't send the log to the buffer, or sends it as empty.
Also, I still have no idea why the clCreateFromGLTexture() function is not recognized on the linking phase.

The setup looks fine, so I think this must be a driver and/or OS issue. Also, I always receive the build log on Linux with GPUs ranging from the 295 to the 680 and some Teslas. Answering your last question: NVIDIA does not yet support OpenCL 1.2.

EDIT: Are you flushing all OpenGL commands via glFinish() as described in the notes to clEnqueueAcquireGLObjects?

Thanks for your reply. OK, so I need to stick with OpenCL 1.1.

Yes, I am using glFinish() before the clEnqueueAcquireGLObjects() call.

I was thinking about a workaround.

Is it possible to, instead of having
texture = clCreateFromGLTexture2D(context,CL_MEM_READ_ONLY,GL_TEXTURE_2D,0,front_tex, &err1);

have something like
float4* texture = get Buffer from GLtexture 3d () and then pass the texture as an array of float4 to the kernel?

Thank you in advance!

I don't think you should prefer that. Texture and buffer memory are handled differently on GPUs, so your first option just hands over a memory pointer to texture memory, while your second option would copy the whole data from texture memory to buffer memory, or in the worst case even TexMem -> HostMem -> BuffMem.

Or correct me if I misunderstood texture memory locations.


I was thinking the same. It's a waste of processing/reading/writing time and memory.

So how the hell can I pass two __read_only images and one __write_only image to the kernel?

In the meantime I will try to figure out a different way of doing it. I’ll post it here if I manage to accomplish something.


I tested different kernels, and all of them simply aren't built when they have two or more __read_only image2d_t variables in their arguments.

Why does this happen?

This must then be a hardware and/or SDK related problem. I just checked, and I can put as many (probably up to CL_DEVICE_MAX_READ_IMAGE_ARGS) __read_only 2D image arguments in the kernel's parameter list.

Do you have multiple GPUs in your system? I once had a similar problem but with constant arguments described here.

That's an interesting question.

I just went over my Device Manager, and under Display adapters I found I have an Intel® HD Graphics 4000 and an NVIDIA GeForce GTX 680.

Might this be the problem? Is there a way to detect which graphics card OpenGL is using and use the same device for OpenCL?

–edit: just experimented with this
clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(buffer), buffer, NULL);
printf(" DEVICE_NAME = %s
", buffer);

got “GeForce GTX 680”.


I just managed to compile it. After some hours of surfing the web, I found that for the GeForce GTX 680 you must actually use all the input arguments in the kernel. Unbelievable bug…

Thank you all for your help :slight_smile: