Vertex shader compilation failed. ERROR: 0:44: '0x80000000' : Integer overflow

Target: OpenGL ES 3.0

Yesterday I released a new version of my app. In it, I made a small change to one of the vertex shaders by adding the following new lines:

if( (vComAssoc[component].x & 0x80000000) != 0 )
{
    v = vec3(0.0, 0.0, 0.0);
}

I am already regretting this - it turns out those lines do not compile on some Qualcomm mobile GPUs. On Adreno 308 (OpenGL ES 3.0 V@269.0 AU@08.00.00.312.044 (GIT@I0b59f3a7cf)), I am getting:

Vertex shader compilation failed. ERROR: 0:44: '0x80000000' : Integer overflow

On Adreno 650 (OpenGL ES 3.0 (4.0.0 - Build 10.18.10.5161)) I am getting:

ERROR: 0:44: '' : syntax error: ERROR___INTEGER_CONST_OVERFLOW

Overflow? How come? Aren't integers supposed to be 32-bit?

I forgot to mention - I am using high precision here:

precision highp int;

Reading the spec ( https://registry.khronos.org/OpenGL/specs/es/3.0/GLSL_ES_Specification_3.00.pdf - page 26), it looks like maybe I should have used an explicit uint literal here instead:

0x80000000u

But wait - if I do

vComAssoc[component].x & 0x80000000u

I am going to get a 'type mismatch' (spec page 26), as vComAssoc is a buffer of ivec4s:

layout (std140) uniform componentAssociation
{
    ivec4 vComAssoc[MAX_COMPON];
};

So how do I properly check whether the highest bit of an int is set? Am I right that if 'A' is an 'ivec4', then 'A.x' is a signed int?

The vComAssoc is a buffer of bitmaps - and elsewhere in the shader I check if various bits of it are set or not. Looks like there’s a problem with checking the highest one…
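For example, a check on one of the lower bits compiles fine on the same drivers (illustrative mask and body, not taken from my real shader):

// Hypothetical lower-bit check - this literal fits in a signed 32-bit int,
// so the compilers accept it; only the 0x80000000 mask above fails.
if( (vComAssoc[component].x & 0x00000010) != 0 )
{
    v = vec3(1.0, 1.0, 1.0);
}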

Both of these implementations contradict the specification, which says:

It is an error to provide a literal integer whose value would be too large to store in a highp uint variable.

As for what to do about that, you can try any of:

  1. int highBit = int(0x80000000u), then vComAssoc[component].x & highBit.
  2. uvec4(vComAssoc[component]).x & 0x80000000u
  3. uint(vComAssoc[component].x) & 0x80000000u

IOW, explicitly convert both to the same type (both to int or both to uint) with constructors.
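For instance, options 1 and 2 spelled out (a sketch only, reusing the names and the assignment from your original post):

// Option 1: fold the unsigned literal into an int mask once, then use int & int.
int highBit = int(0x80000000u);

if( (vComAssoc[component].x & highBit) != 0 )
{
    v = vec3(0.0, 0.0, 0.0);
}

// Option 2: convert the whole ivec4 to uvec4, then use uint & uint.
if( (uvec4(vComAssoc[component]).x & 0x80000000u) != 0u )
{
    v = vec3(0.0, 0.0, 0.0);
}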

Ok, I see, thanks a lot GClements - actually it is simple.

I am too used to thinking about vComAssoc as a bitmap, where I don't care whether GLSL treats it as an int or a uint; I only care about the values of the individual bits…

Actually the spec is a bit confusing. My version does not have the sentence you have quoted

It is an error to provide a literal integer whose value would be too large to store in a highp uint variable.

but rather (page 25)

It is an error to provide a literal integer whose bit pattern cannot fit in 32 bits. Note:

  1. This only applies to literals; no error checking is performed on the result of a constant expression.
  2. Unlike C++, hexadecimal and decimal literals behave in the same way

then, on page 26, they literally give the example:

0x80000000 // OK. Evaluates to -2147483648

So it is looking like the 0x80000000 literal constant should be OK after all?

It should be, but your original post indicates that the Adreno compilers don’t like it. That’s a bug in the implementation; the spec is quite clear that values up to 0xFFFFFFFF are valid literals for either signed or unsigned integers.

One more problem I just thought about:

Like I said, vComAssoc is a bitmap. It looks like the most correct solution is to change the type declaration of vComAssoc to a uvec4:

layout (std140) uniform componentAssociation
{
    uvec4 vComAssoc[MAX_COMPON];
};
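With that declaration, the check becomes straightforward (a sketch; the body is just my original assignment):

// Both operands of & are unsigned now, so the uint literal can be used directly.
if( (vComAssoc[component].x & 0x80000000u) != 0u )
{
    v = vec3(0.0, 0.0, 0.0);
}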

I remember thinking about this back when I designed the data structure, but I decided against it, because from the CPU side I feed bits into it like so:

// Bind the buffer to the UBO binding point and point the uniform block at that binding.
GLES30.glBindBufferBase(GLES30.GL_UNIFORM_BUFFER, ASSOC_UBO_BINDING, indexA);
GLES30.glUniformBlockBinding(programH, mAssocBlockIndex[variant], ASSOC_UBO_BINDING);

This is Java, as you can see, and Java doesn't have unsigned types. So I wasn't sure what would happen if I sent signed integers from the CPU and fed them into an unsigned type on the GPU.

So since at that time I didn't need bit 32, I decided to simply make it an ivec4.

But over time I have needed more and more bits, and now I need even the last one, bit 32, in this bitmap…

And the problem: since from the CPU I send signed integers into a signed integer buffer on the GPU, wouldn't the highest bit 32 just 'disappear' in transfer on some GPUs (certainly not on mine, or the 10 others I've tested!)?

No. Java has to call OpenGL via its C ABI, which doesn’t enforce types; it’s just bytes at that point.

Both Java and GLSL require signed integers to use two’s complement representation.

Ok, thanks again. I have decided to go with your option 3:

uint(vComAssoc[component].x) & 0x80000000u

Even though I feel the most correct solution would be to change the type declaration of vComAssoc to 'uvec4', I am simply too afraid to find out what would explode in Qualcomm's drivers if I dared to do that.
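So the offending check now reads like this (assuming nothing else in that branch changed):

// uint & uint: the literal no longer needs to squeeze into a signed int.
if( (uint(vComAssoc[component].x) & 0x80000000u) != 0u )
{
    v = vec3(0.0, 0.0, 0.0);
}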