I am already regretting this - it turns out those lines do not compile on some Qualcomm mobile GPUs. On Adreno 308 (OpenGL ES 3.0 V@269.0 AU@08.00.00.312.044 (GIT@I0b59f3a7cf)), I am getting:
So how do I properly check whether the highest bit of an int is set? Am I right that if ‘A’ is an ‘ivec4’, then ‘A.x’ is a signed int?
vComAssoc is a buffer of bitmaps, and elsewhere in the shader I check whether various bits of it are set or not. It looks like there’s a problem with checking the highest one…
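For reference, here is roughly the kind of check I mean - a minimal sketch, not my verbatim shader, and the declarations here are just stand-ins for the real ones:

#version 300 es
precision highp float;
precision highp int;

// Hypothetical stand-in for the real declaration of the bitmap buffer.
uniform ivec4 vComAssoc[16];
uniform int component;

out vec4 fragColor;

void main()
{
    // Lower bits test fine with signed literals...
    bool bit0  = (vComAssoc[component].x & 0x1) != 0;
    // ...but a signed hex literal for the highest bit is the kind of line the compiler rejects.
    bool bit31 = (vComAssoc[component].x & 0x80000000) != 0;
    fragColor = vec4(bit0 ? 1.0 : 0.0, bit31 ? 1.0 : 0.0, 0.0, 1.0);
}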
Ok, I see, thanks a lot GClements - actually it is simple.
I am too used to thinking of vComAssoc as a bitmap, where I don’t care whether GLSL treats it as an int or a uint; I only care about the values of the individual bits…
Actually the spec is a bit confusing. My version does not have the sentence you quoted:
It is an error to provide a literal integer whose value would be too large to store in a highp uint variable.
but rather (page 25):
It is an error to provide a literal integer whose bit pattern cannot fit in 32 bits. Note:
This only applies to literals; no error checking is performed on the result of a constant
expression.
Unlike C++, hexadecimal and decimal literals behave in the same way
then, on page 26, they literally give the example:
0x80000000 // OK. Evaluates to -2147483648
So it looks like the 0x80000000 literal constant should be OK after all?
It should be, but your original post indicates that the Adreno compilers don’t like it. That’s a bug in the implementation; the spec is quite clear that values up to 0xFFFFFFFF are valid literals for either signed or unsigned integers.
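To spell out what the quoted rules allow, here is a small sketch (it just restates the spec passages above; the error case is left commented out):

#version 300 es
precision highp float;
precision highp int;
out vec4 fragColor;

void main()
{
    int  a = 0x80000000;    // valid per spec: fits in 32 bits, evaluates to -2147483648
    int  b = 0xFFFFFFFF;    // valid per spec: evaluates to -1
    uint c = 0x80000000u;   // valid: unsigned literal, value 2147483648u
    // int d = 0x100000000; // error: the bit pattern does not fit in 32 bits
    fragColor = vec4(float(a), float(b), float(c), 1.0);
}

A conforming compiler accepts the first three declarations; rejecting the signed forms is the implementation bug.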
I remember thinking about it back when I designed this data structure, but I decided against it because, on the CPU side, I feed bits into it like so:
This is Java, as you can see, and Java doesn’t have unsigned integer types. So I wasn’t sure what would happen if I sent signed integers from the CPU and, on the GPU, fed them into an unsigned type.
So since at that time I didn’t need bit 32, I decided to simply make it an ivec4.
But over time I have needed more and more bits, and now I need even the last one, bit 32, in this bitmap…
And the problem: since from the CPU I send signed integers into a signed integer buffer on the GPU, wouldn’t (on some GPUs, certainly not on mine or the 10 others I’ve tested!) the highest bit, bit 32, just ‘disappear’ on transfer?
Ok, thanks again. I have decided to go with your option 3:
uint(vComAssoc[component].x) & 0x80000000u
Even though I feel the most correct solution would be to change the type declaration of vComAssoc to ‘uvec4’, I am simply too afraid to find out what is going to explode in Qualcomm’s drivers if I dare to do that.
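In case it helps anyone else hitting the same problem, the check now looks roughly like this - a sketch, where only the masking expression is the actual change and the declarations are stand-ins:

#version 300 es
precision highp float;
precision highp int;

// Hypothetical stand-ins for the real declarations; vComAssoc stays an ivec4.
uniform ivec4 vComAssoc[16];
uniform int component;

out vec4 fragColor;

void main()
{
    // Reinterpret the signed int as uint (the bit pattern is preserved),
    // then mask with an unsigned literal, so no signed 0x80000000 is needed.
    bool highestBitSet = (uint(vComAssoc[component].x) & 0x80000000u) != 0u;
    fragColor = highestBitSet ? vec4(1.0) : vec4(0.0);
}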