Striped Image problem


I'm trying a simple grayscale conversion on an image, but the result is a striped image (deliberately applied to the bottom part only):

Kernel :

	__kernel void copy(__read_only image2d_t imageIn, __write_only image2d_t imageOut)
		int gid0 = get_global_id(0);
		int gid1 = get_global_id(1);
		uint4 pixel;


		if(gid1 < 50)
			pixel = (uint4)(0.299*pixel.x + 0.587*pixel.y + 0.114*pixel.z);
//			pixel = (uint4)(0.6*pixel.x + 0.3*pixel.y + 0.1*pixel.z);
//			pixel = (uint4)pixel/2;


When I have:

pixel = (uint4)pixel/2; //works

-> I can see the image divided by 2.

Same for:
pixel = (uint4)((pixel.x + pixel.y + pixel.z)/3); //works

pixel = (uint4)(0.6*pixel.x + 0.3*pixel.y + 0.1*pixel.z); //doesn’t work correctly, striped

So, has anybody had the same problem, and how can I resolve it?
What I really want is to understand how it works.


I am puzzled as well :lol:

Could this have something to do with using integer coefficients vs. using floats? In your examples, using integers always works and using floats always fails, so I wonder if there are some weird type promotion rules at work.

I hope, for the sake of my career, that it’s not just a cast problem. :slight_smile:
Excuse my lack of knowledge.

I just tried :

pixel = (uint4)((0.3*pixel.x + 0.3*pixel.y + 0.3*pixel.z)/0.9);

, and grayscale works well, but for example:

pixel = (uint4)((0.4*pixel.x + 0.3*pixel.y + 0.2*pixel.z)/0.9);

is striped.

I tried to work with float4 pixel and read_imagef(…), but the kernel is not executed.

It seems like pixel.x, pixel.y and pixel.z are columns of the image.

Am I wrong?

(Can’t edit previous post)

P.S. : Also see:
, but converting to 32bpp is not really the solution; I tried it, but had problems saving the image as 32bpp with GIMP.

I feel like I’m missing something obvious…

Anyway, have you tried this?

pixel = (uint4)((4*pixel.x + 3*pixel.y + 2*pixel.z)/9);

What about this?

pixel = (4*pixel.x + 3*pixel.y + 2*pixel.z)/9;

Both snippets give the same result (with the (4,3,2) coefficients, not with (0.4,0.3,0.2)):

P.S. : I don’t have a debugger for the kernel, but I can test it by changing the “if” condition:
if(gid1 < 50){…} <-> if(gid1 < 100){…} //height of the gray part

That gave me an idea. How did you allocate the image? Please show us the call to clCreateImage2D().

BTW, did you read the solution to the thread you linked to? The source image was in RGB888 format and they were reading it as RGBA8888. That sounds like the same problem you are seeing here – I knew I was missing something obvious.

Call to clCreateImage2D() :

	cl_image_format format;
	format.image_channel_order = CL_RGBA;
	format.image_channel_data_type = CL_UNSIGNED_INT8;

	cl_mem imagea = clCreateImage2D (	context,
										CL_MEM_READ_ONLY ,

If going from 24bpp to 32bpp solves the problem, then this should indeed be RGBA. What I meant was that conversion shouldn’t be the solution, unless there is no choice, of course… or maybe I should work with another format?

I don’t understand. What’s the problem with converting from RGB to RGBA? It’s a simple transformation. OpenCL doesn’t support RGB888.

If OpenCL doesn’t support RGB888 (I didn’t know), I’ll focus on RGBA, like you said.
(P.S. : GIMP: converting RGB to RGBA generates a black image, but that’s another problem.)
I will try this; it should work (cf. the previously linked thread).

In any case, thank you for taking the time to reply and help me.
The discussion can continue, maybe… : )

Respectfully !


I had this kind of problem too with RGB. The image was striped. When I wanted to write the resulting image back in RGB, the alpha component was treated as a color, which caused the shift.
My solution was to create a table of size 4*height*width for the input image, with every element table[4*i+3] = 0.
I did the same to convert the table back into a bitmap.

I think it’s worth mentioning why OpenCL doesn’t support RGB888 out of the box. The reason is that most hardware doesn’t support 24-bit RGB internally. Instead, it is expanded to 32-bit RGBA.

Finally, I opted for an RGBA image, and it works fine. Orobas’s trick could also be used; I tried it, but it didn’t work for some reason.


OK, that must be it. We learn something every day. : )