Compute Shader binding and image type

Dear guys

I am trying to write a simple compute shader program … and I am getting an error at compile time.

This is my simple shader … basically it computes a histogram from a 16-bit grayscale input image. A very basic and easy task … but I am stuck with the image2D types :slight_smile:

My machine is supposed to support OpenGL 4.5 … I updated the drivers and downloaded GLAD with OpenGL 4.3 support … the host program compiles and runs, but the compute shader does not compile, due to a weird error about r32ui and r16ui. I want to use integer types … I do not want to convert to floating point because it would add latency to the program … What can I do? Why are r16ui and r32ui not supported??

#version 430 core
layout (binding=0, r16ui) readonly  uniform image2D input_image;
layout (binding=1, r32ui) coherent  uniform image1D histogram;
layout (local_size_x = 16, local_size_y = 16) in;
void main()
{
    int gray = imageLoad(input_image, gl_GlobalInvocationID.xy).r;
    imageAtomicAdd(histogram, gray, 1);
}

COMPILE ERROR:

ERROR: 0:2: '' : image layout format qualifier does not match image type
ERROR: 0:2: '' : image layout format qualifier does not match image type
...

It is fixed when I change r16ui and r32ui to r32f … but I don't want that :frowning:

Let me know if I am doing something wrong …

thank you
freelancerLatino1 at gmail dot com

I already installed the latest GPU drivers (I have an Intel UHD 620 integrated GPU) … and I used http://glad.dav1d.de to generate the GLAD library for OpenGL 4.5 and for OpenGL 4.3 … neither of them lets my compute shader code compile properly.

S.O.S.

Three days stuck on this weird problem … I am going bald ;(

For an unsigned integer image format, the variable type needs to be uimage2D or uimage1D.

Dear GClements

You are totally right! That was it …

Thank you

Hello guys

I have another issue now, and I am stuck again. This is my compute shader:

#version 430 core
layout (binding=0, r16ui) readonly  uniform uimage2D input_image;
layout (binding=1, r32ui) coherent  uniform uimage2D histogram;
layout (local_size_x = 1, local_size_y = 1) in;

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    int gray = int(imageLoad(input_image, p).r);
    imageAtomicAdd(histogram, ivec2(gray/256, gray%256), 1);
}

So, I pass an input image and a histogram of 65536 entries split into a 256x256 output image. All is fine … but, after dispatching, I want to read the histogram back into main memory.

So, I do the following:

...
glBindImageTexture(0, tex_input, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R32UI);
glBindImageTexture(1, tex_output, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R32UI);

glUseProgram(gComputeProgram);
glDispatchCompute(1, 1, 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

glBindImageTexture(1, tex_output, 0, GL_TRUE, 0, GL_READ_WRITE, GL_R32UI);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED, GL_UNSIGNED_INT, histogram);

and I get INVALID_OPERATION right at glGetTexImage … I don't have a clue what to do, and I need help.

By the way, “histogram” is the same buffer that I passed to the texture object

glGenTextures(1, &tex_output);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_output);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, 256, 256, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, histogram);

and the input is the image itself:

glGenTextures(1, &tex_input);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_input);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_SHORT, buffer);
...

I would appreciate your help … I have been stuck on it for more than a week.
Thank you

The format should be GL_RED_INTEGER, not GL_RED.
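
For reference, here is a minimal sketch of the readback with that fix; binding tex_output explicitly and using GL_TEXTURE_UPDATE_BARRIER_BIT are assumptions about the surrounding code, not something your snippet requires:

// Make the compute shader's image writes visible to glGetTexImage.
glMemoryBarrier(GL_TEXTURE_UPDATE_BARRIER_BIT);

// Read the 256x256 GL_R32UI histogram back into client memory.
glBindTexture(GL_TEXTURE_2D, tex_output);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);   // no PBO bound, so data goes to client memory
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, histogram);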

Oh…you are right! …
Thank you!
Now I am surprised that I am getting just one non-zero entry in the whole histogram, with the value “1” … hmmm … and it is happening only at entry 0.

I found it … I was dispatching only (1, 1, 1), and it should be (width, height, 1) to cover all the image pixels.
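
A minimal sketch of that dispatch, assuming width and height are the input image dimensions used earlier in the thread:

glUseProgram(gComputeProgram);
// The shader declares local_size_x = 1, local_size_y = 1,
// so one work group per pixel covers the whole image.
glDispatchCompute(width, height, 1);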

Now the problem is that all the pixel counts end up in histogram[0] … that's odd.
Two possibilities:

  1. the input image is being read as all zeros (which is not true of the actual image)
  2. gl_GlobalInvocationID.xy is always (0,0), and the gray level at that pixel happens to be zero

What could be happening?
I can try to debug it somehow, but it is strange …

OK, gl_GlobalInvocationID.xy is not always (0,0) … it seems to be fine … so the only remaining possibility is that imageLoad is always returning zero … however, there were no errors when uploading the input image. What could it be this time? …

Here is how I load the image into texture memory and bind it to the compute shader:

GLuint tex_input;
glGenTextures(1, &tex_input);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_input);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, width, height, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
//... later, when the image is available...
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_input);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED, GL_UNSIGNED_SHORT, buffer);
glBindImageTexture(0, tex_input, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R16UI);

Hello

I solved it in this way:

glTexImage2D(GL_TEXTURE_2D, 0, GL_R16UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_SHORT, NULL);
//... later, when the image is available...
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RED_INTEGER, GL_UNSIGNED_SHORT, buffer);
glBindImageTexture(0, tex_input, 0, GL_TRUE, 0, GL_READ_ONLY, GL_R16UI);
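
In case it helps anyone else, here is a minimal sketch of how to check which internal format a texture actually ended up with (the internal_format variable name is just for illustration). An unsized GL_RED allocation gives a normalized format chosen by the driver, not the GL_R16UI that an r16ui image unit expects, which is essentially what went wrong here:

GLint internal_format = 0;
glBindTexture(GL_TEXTURE_2D, tex_input);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internal_format);
// For integer image access with an r16ui layout qualifier,
// this should report GL_R16UI.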

Also, I changed the compute shader by swapping the DIV and MOD:

imageAtomicAdd(histogram, ivec2(gray % 256, gray/256), 1);

Now it works!
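
For anyone reading this later, a minimal sketch (flat_hist and count_for_gray_1234 are just illustrative names) of why the (gray % 256, gray / 256) mapping lines up with a flat 65536-bin histogram when the 256x256 image is read back:

// glGetTexImage returns texels row by row: texel (x, y) lands at index y * 256 + x.
// With x = gray % 256 and y = gray / 256, that index is exactly gray,
// so the readback buffer can be used directly as a 65536-bin histogram.
GLuint flat_hist[256 * 256];
glGetTexImage(GL_TEXTURE_2D, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, flat_hist);
GLuint count_for_gray_1234 = flat_hist[1234];   // occurrences of gray level 1234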

thanks anyway