I am currently working on a SPIR-V to GLSL decompiler and I am confused about the behaviour of signed integers.
The description of most instructions (e.g. OpIAdd) does not specify whether both integer operands must have the same type and signedness. I would assume they don't have to, but to my understanding that would defeat the purpose of distinguishing signed and unsigned integer types. Also, the OpenGL reference compiler inserts OpBitcast instructions to convert the operands beforehand.
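To illustrate why I suspect the signedness might not matter for addition: if I understand two's complement correctly, signed and unsigned addition produce the same bit pattern, so a single OpIAdd could serve both. A small sketch of my reasoning (the helper `to_signed` is just my own illustration, not anything from the spec):

```python
M = 2 ** 32  # value range of a 32-bit integer

def to_signed(x):
    """Reinterpret a 32-bit unsigned bit pattern as a signed value."""
    return x - M if x >= M // 2 else x

a, b = 0xFFFFFFFF, 5                             # 0xFFFFFFFF is -1 when read as signed
unsigned_sum = (a + b) % M                       # unsigned wrap-around: 4
signed_sum = (to_signed(a) + to_signed(b)) % M   # (-1 + 5) mod 2^32: also 4
assert unsigned_sum == signed_sum                # identical bit pattern either way
```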
The other thing that confuses me is the OpIMul instruction. Is it signed or unsigned? If it is one of those, how is the other one encoded? Or does it depend on the signedness of the operand types? The reference compiler generates an OpIMul instruction for both signed and unsigned multiplications. But how does that work in OpenCL kernels, where all integer types are declared as unsigned? And why is division different, with separate OpUDiv and OpSDiv instructions?
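My current guess for why multiplication gets one instruction but division gets two: in two's complement, the low 32 bits of a product are the same whether the operands are interpreted as signed or unsigned, while a quotient genuinely differs. A quick check of that reasoning (again, `to_signed` is my own helper, reusing 32-bit modular arithmetic; signed division here truncates toward zero):

```python
M = 2 ** 32  # value range of a 32-bit integer

def to_signed(x):
    """Reinterpret a 32-bit unsigned bit pattern as a signed value."""
    return x - M if x >= M // 2 else x

a, b = 0xFFFFFFFE, 3                             # 0xFFFFFFFE is -2 when read as signed

# Multiplication: truncated to 32 bits, signed and unsigned agree.
unsigned_mul = (a * b) % M
signed_mul = (to_signed(a) * to_signed(b)) % M   # (-2 * 3) mod 2^32
assert unsigned_mul == signed_mul                # so one OpIMul could suffice

# Division: the results differ, so the instruction must know the signedness.
unsigned_div = a // b                            # huge positive quotient
signed_div = int(to_signed(a) / to_signed(b)) % M  # -2 / 3 truncated toward zero = 0
assert unsigned_div != signed_div
```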
Thanks in advance for clarifying.