Images with integer data types don't seem to work in OpenGL 4.5 or 4.6

I’ve been trying half the day to use variously sized integer-based images in my compute and fragment shaders. No matter what I do, I can’t get any data to either store into them or read from them (it’s unclear which side the problem is on…).

Right now I’m using this in my compute shader:

layout(binding = 1, rgba32ui) uniform uimage2D o_uvec4_texture;
...
imageStore(o_uvec4_texture, terrain_texel_coord, uvec4(0, 0, 0, uint(value)));

Then in my fragment shader:

layout (binding = 1) uniform usampler2D u_uvec4_texture;
...
uint is_land = texture(u_uvec4_texture, texture_uv).w;

And I’m using these GL enums when initialising/binding/etc. the texture on CPU side:

GL_RGBA32UI, GL_RGBA, GL_UNSIGNED_INT

If I change all the texture parameters to be a vec4 (which I’m successfully using all over my codebase) then I get the data I expect… but unfortunately I need to do some bit twiddling, so I really need integer types specifically in this case.

Any ideas what I’m doing wrong? I’ve spent ages going through the GLSL and OpenGL docs, and as far as I can tell I’m doing everything right.

ADDITIONAL INFO AFTER COMMENTS:

OS: Windows
Graphics card: NVIDIA GeForce RTX 3070 Ti
Driver: GeForce game ready driver, v555.85 (released 21st May, 2 days ago)

What GPU, OS and driver version are you testing on?

After a quick glance, 3 questions:

  1. Are you checking for GL errors?
  2. You’re using image writes (side effects). So are you sure you have “correct” synchronization to make sure that the frag shader will actually see the correct values? (See the sketch just after this list.)
  3. texture() typically samples with interpolation including MIPmaps and aniso depending on MIN/MAG_FILTER/MAX_ANISO/etc. and the current derivatives in the frag shader. To avoid this entanglement, I would just use texelFetch() and grab a specific texel from a specific level of the GL_RGBA32UI texture. Get that working. Then if you need filtering, go back and fix what isn’t set properly about the filtering params.
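
Re. #2: “correct” synchronization for image writes generally means an explicit glMemoryBarrier() between the dispatch that writes the image and the draw that samples it; launch order alone isn’t enough. A rough sketch (the work group counts are placeholders):

    GLuint groups_x = 64, groups_y = 64;        // placeholder work group counts
    glDispatchCompute(groups_x, groups_y, 1);

    // Image stores are incoherent writes; tell GL that subsequent texture fetches
    // (texture()/texelFetch() in the frag shader) must see them.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);

    // Use GL_SHADER_IMAGE_ACCESS_BARRIER_BIT instead if the next stage reads the
    // image with imageLoad() rather than through a sampler.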

Related to #3:

SUGGESTION: As an intermediate step, I’d suggest disabling the whole compute-shader-writes-an-integer-texture thing, creating+populating one from the CPU side using standard GL calls (glTexStorage2D(), etc.), and feeding that into your frag shader. Make sure “that” simple thing actually samples properly in your frag shader. Then, bolt on the compute shader gizmo.
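
Something like this rough sketch, say (the function name is made up, and it assumes a current GL 4.5 context with loaded function pointers):

    #include <vector>

    // Create a GL_RGBA32UI texture and fill it with known test values from the CPU.
    GLuint make_test_rgba32ui_texture(GLsizei width, GLsizei height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);

        // Immutable storage: only the sized internal format is needed here.
        glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32UI, width, height);

        // Integer textures are incomplete with linear filtering, so force NEAREST.
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

        // Upload known values; integer internal formats take a *_INTEGER external format.
        std::vector<GLuint> pixels(width * height * 4, 42u);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                        GL_RGBA_INTEGER, GL_UNSIGNED_INT, pixels.data());
        return tex;
    }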

Added extra info at the end of the question to cover this; thanks for suggesting that!

Re. 1) Errors:

I currently check for shader compilation errors, and while I was still getting things wrong I was seeing the shader complain about the format I gave for uimage2D, about trying to use image2D with an integer format, and about there being no overload for imageLoad()… The shader compiler at least isn’t seeing any errors on either the compute or frag side anymore… I’ll check for errors when I’m binding and dispatching and such though, that’s a good shout. I should build that into my classes, shouldn’t I?
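
Something like this is what I have in mind (the name and wrapping are just illustrative; it drains the glGetError() queue after suspect calls):

    #include <cstdio>

    // Drain the GL error queue and report where we were when checking.
    static void check_gl_errors(const char* where)
    {
        for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
            std::fprintf(stderr, "GL error 0x%04X after %s\n", err, where);
    }

    // e.g.
    //   glTexImage2D( ... );
    //   check_gl_errors("glTexImage2D (uvec4 texture)");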

Re. 2) When you say correct synchronisation, you mean the frag shader is definitely running after the compute shader so the values are definitely populated already?

In which case yes, I know for sure that’s true. I only run the compute shaders before rendering begins, and the other compute shaders I have running before and after this one all have the correct data populated when rendering begins… Also, when I change the texture and all the creation/binding details etc. to use vec4 instead of uvec4, I see the buffer is populated correctly… It’s specifically the integer images/textures that aren’t working for me (or that I’m doing something wrong with).

I think 2) kind of covers 3). I am successfully doing this with 8 or 9 other vec4 image2Ds being generated by compute shaders and then read by the fragment shader all over the place, and all of them are constructed with the same parameters, including texture size (except for the uvec4 version in this one case).

Ok, I’m getting error 1282 (GL_INVALID_OPERATION) when calling glTexImage2D() during initialisation of the uvec4 texture specifically. Given that the same function runs identically for the vec4s, just with different OpenGL data type enum values, I can only guess that means I’ve made a mistake with them…

GL_RGBA32UI, GL_RGBA, GL_UNSIGNED_INT: these are definitely the values I’m using for the uvec4 initialisation. I’ve got a map with a struct that holds these three enums per data type that I plug into the calls…

In this case, that would look like this:

    glTexImage2D(GL_TEXTURE_2D,
                 0,
                 texture_data_type_enums.sized_internal_format, // GL_RGBA32UI
                 size.x, size.y,
                 0,
                 texture_data_type_enums.composition_format,    // GL_RGBA
                 texture_data_type_enums.element_type,          // GL_UNSIGNED_INT
                 data);

Oh man… I completely missed the fact that all the GL_RGBA-like enums have an integer version… that seems to have fixed it.

Why on earth do they need reassuring that the texture has integers in so many places…

c++ side:
GL_RGBA32UI instead of GL_RGBA32F…
GL_RGBA_INTEGER instead of GL_RGBA
GL_UNSIGNED_INT instead of GL_FLOAT

glsl side:
rgba32ui instead of rgba32f
uimage2D instead of image2D

I know I’m not in a position to comment on graphics driver/library design, but surely GL_RGBA32UI on its own tells the whole story on the C++ side, and rgba32ui could be enough to tell which type of image2D it is, instead of needing separate uimage2D and iimage2D types…

Anyway, that’s hardcoded in my library now, so I only ever need to be sure on the GLSL side from here on…
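
Roughly, the corrected mapping now looks something like this (the type and key names here are illustrative rather than my exact code; the field names match the comments in the glTexImage2D() call above):

    #include <map>

    enum class TextureDataType { Vec4, UVec4, IVec4 };   // illustrative key type

    struct TextureDataTypeEnums {
        GLenum sized_internal_format;
        GLenum composition_format;
        GLenum element_type;
    };

    // The key point: integer internal formats pair with the *_INTEGER external formats.
    const std::map<TextureDataType, TextureDataTypeEnums> texture_data_type_enum_map = {
        { TextureDataType::Vec4,  { GL_RGBA32F,  GL_RGBA,         GL_FLOAT        } },
        { TextureDataType::UVec4, { GL_RGBA32UI, GL_RGBA_INTEGER, GL_UNSIGNED_INT } },
        { TextureDataType::IVec4, { GL_RGBA32I,  GL_RGBA_INTEGER, GL_INT          } },
    };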

Thanks for the help!

You might consider that there are separate concepts involved here:

  • GL internal format – The “actual” internal format of the texel data on the GPU (**)
  • GL external format+type – The format of the texel data block that you’re uploading from the CPU

These are two different things. And GL supports some texel conversions from external to internal format during the upload. So you need to tell it what the format of each is.
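
For example (a sketch; the size and data values are placeholders), the CPU-side block can hold 16-bit values while the texture stores 32-bit ones, and GL widens them during the upload:

    #include <vector>

    // Internal format: 32-bit unsigned texels on the GPU.
    // External format + type: the CPU-side data is 16-bit unsigned per channel.
    const GLsizei width = 256, height = 256;                  // placeholder size
    std::vector<GLushort> cpu_pixels(width * height * 4, 7);  // placeholder data
    glTexImage2D(GL_TEXTURE_2D, 0,
                 GL_RGBA32UI,                                 // internal format
                 width, height, 0,
                 GL_RGBA_INTEGER, GL_UNSIGNED_SHORT,          // external format + type
                 cpu_pixels.data());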

As to why the external format is split across a format and type, rather than merged into “one thing” like GL internal format, I think that’s just for compatibility with legacy APIs like glTexImage2D(). Newer texture create APIs like glTexStorage2D() jettison the whole uploading-during-creation thing and thus don’t make you provide the GL external format+type, but instead just the GL internal format.

** = Assuming the GPU+driver supports it natively

This separate typing on the GLSL side is so that the shader can be largely understood, compiled, and linked without having a monster application in tow providing runtime introspection of the texture and buffer bindings and types currently active. Plus, there’s some GLSL syntax that just makes no sense depending on the texture and buffer formats. So it’s easiest to have the shader just be given this stuff.

It’s the same thing in C++, where you define the types of the parameters passed to functions. Without types, the compiler has no idea how to precompile this stuff to object code (shader IR or ISA).

Sure thing!