Abort error when generating fractals in clEnqueueReadBuffer


I’m currently working on a project in C.
The objective is to send a char * (which holds every pixel of the final image) to the kernel, have the kernel fill the memory it points to, and then put the final image on the screen.

The following code does not handle errors (it will always be executed on the same PC, and ocl->err equals 0 EVERY TIME, except for clEnqueueReadBuffer, which returns -30; if I add a clFinish(ocl->cmd_queue) before it, ocl->err = 0, but then the program aborts).
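For context on the -30: that is CL_INVALID_VALUE in cl.h. A small helper like the one below (the name `check` is my own, not from my code) would report which call fails instead of letting the next call silently overwrite ocl->err:

```c
#include <stdio.h>

/* Hypothetical helper: report a failing OpenCL call instead of letting
 * the next call overwrite ocl->err. In cl.h, CL_SUCCESS is 0 and
 * CL_INVALID_VALUE is -30. */
static int	check(int err, const char *where)
{
	if (err != 0)
		fprintf(stderr, "%s failed with error %d\n", where, err);
	return (err);
}
```

It would be used as `check(ocl->err, "clEnqueueReadBuffer");` after each call.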

How it works:

I use a function that mallocs a char pointer, like img_data = generate_image_by_lib();
generate_image_by_lib takes:

  • the size of the image (1000 * 1000 here),
  • set->line (the address, so the function can fill it; an int representing the number of bytes per line: 4 bytes represent a pixel, so here its value equals 4000),
  • set->nbit (the address, so the function can fill it; an int representing the number of bits needed to represent a pixel color).

The set->deca_nbit value is simply (set->nbit) * 8.
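To make the arithmetic above concrete, here is a minimal sketch (constants assumed from the description: a 1000 * 1000 image with 4 bytes per pixel):

```c
/* Size arithmetic from the description above (values assumed):
 * 4 bytes per pixel, 1000x1000 image. */
enum { WIDTH = 1000, HEIGHT = 1000, BYTES_PER_PIXEL = 4 };

static int	bytes_per_line(void)	/* what set->line holds */
{
	return (WIDTH * BYTES_PER_PIXEL);	/* 4000 */
}

static int	image_bytes(void)	/* size passed to clCreateBuffer */
{
	return (HEIGHT * bytes_per_line());	/* 4000000 */
}
```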

I would like to create a buffer with clCreateBuffer(), pass the buffer to the kernel, read the buffer back into img_data, and then display img_data with put_image_to_window();

IT WORKS but, as I said, the program aborts, or it freezes between clEnqueueNDRangeKernel() and clEnqueueReadBuffer(). The image appears on the screen, but sometimes not the entire image.

Thanks for your time, and sorry for my English!


t_ocl	*init_open_cl(t_mlx *set)
{
	t_ocl	*ocl;

	ocl = (t_ocl *)ft_memalloc(sizeof(t_ocl));
	ocl->err = clGetPlatformIDs(1, &ocl->plateform_id, &ocl->plateform_count);
	ocl->err = clGetDeviceIDs(ocl->plateform_id, CL_DEVICE_TYPE_GPU, 1, &ocl->device, &ocl->device_count);
	ocl->properties[0] = CL_CONTEXT_PLATFORM;
	ocl->properties[1] = (cl_context_properties)ocl->plateform_id;
	ocl->properties[2] = 0;
	ocl->context = clCreateContext(ocl->properties, ocl->device_count, &ocl->device, NULL, NULL, &ocl->err);
	get_program(ocl); // fills a char * (ocl->src) with the contents of the "mandelbrot.cl" file
	ocl->cmd_queue = clCreateCommandQueue(ocl->context, ocl->device, 0, &ocl->err);
	ocl->prog = clCreateProgramWithSource(ocl->context, 1, (const char **)&ocl->src, NULL, &ocl->err);
	ocl->err = clBuildProgram(ocl->prog, ocl->device_count, &ocl->device, NULL, NULL, NULL);
	if (ocl->err != CL_SUCCESS)
	{
		size_t	len;
		char	buffer[2048];

		printf("Error: Failed to build program executable!\n");
		clGetProgramBuildInfo(ocl->prog, ocl->device, CL_PROGRAM_BUILD_LOG, sizeof(buffer), buffer, &len);
		printf("%s\n", buffer);
		exit(1);
	}
	ocl->kernel = clCreateKernel(ocl->prog, "fractal", &ocl->err); // "fractal" is the name of my kernel function
	ocl->output = clCreateBuffer(ocl->context, CL_MEM_WRITE_ONLY, (1000 * set->line), NULL, &ocl->err);
	ocl->global_work_size = 1000 * set->line * 3;
	ocl->err = clSetKernelArg(ocl->kernel, 0, sizeof(int), &set->deca_nbit);
	ocl->err = clSetKernelArg(ocl->kernel, 1, sizeof(int), &set->line);
	ocl->err = clSetKernelArg(ocl->kernel, 2, sizeof(cl_mem), &ocl->output);
	ocl->err = clEnqueueNDRangeKernel(ocl->cmd_queue, ocl->kernel, 1, NULL, &ocl->global_work_size, NULL, 0, NULL, NULL);
	// ocl->err = clFinish(ocl->cmd_queue); // aborts if I uncomment this
	ocl->err = clEnqueueReadBuffer(ocl->cmd_queue, ocl->output, CL_TRUE, 0, (1000 * set->line), set->img_data, 0, NULL, NULL);
	return (ocl);
}

Is it normal that the global work size is three times larger than the buffer size?
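For comparison, here are the numbers involved (assuming a 1000 * 1000 image and line == 4000, as described above; the function names are mine, for illustration only). With one work-item per byte the NDRange would be 1000 * line; with one per pixel it would be width * height; 1000 * line * 3 exceeds both, so work-items past the end of the buffer could write out of bounds:

```c
#include <stddef.h>

/* Hypothetical comparison of NDRange sizes, assuming width = height = 1000
 * and line = 4000 (4 bytes per pixel). */
static size_t	items_per_pixel(size_t w, size_t h)
{
	return (w * h);			/* one work-item per pixel */
}

static size_t	items_per_byte(size_t h, size_t line)
{
	return (h * line);		/* one work-item per byte; equals the buffer size */
}

static size_t	current_global(size_t h, size_t line)
{
	return (h * line * 3);		/* the value used in the code above */
}
```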