I’m writing a simple shader that writes some data to a framebuffer. In particular, the data are integers whose meaningful range goes from 0 to screenWidth.
At the moment I’m writing to an RGBA / UNSIGNED_BYTE texture like this:
    int ssInt = /*MyValue*/;
    int byte = 256;
    int b0 = int(mod(float(ssInt), 256.0));
    int b1 = int(mod(float(ssInt / byte), 256.0));
    int b2 = int(mod(float(ssInt / (byte * byte)), 256.0));
    int b3 = int(mod(float(ssInt / (byte * byte * byte)), 256.0));
    gl_FragColor = vec4(float(b0) / 255.0, float(b1) / 255.0, float(b2) / 255.0, float(b3) / 255.0);
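For context, the matching decode when sampling that texture back would look roughly like this (just a sketch, still GLSL 1.20; packedTex and texCoord are placeholder names I’m using here):

    #version 120
    uniform sampler2D packedTex;   // the RGBA8 texture written by the shader above
    varying vec2 texCoord;

    void main()
    {
        vec4 enc = texture2D(packedTex, texCoord);
        // Undo the /255.0 scaling and recombine the four bytes.
        // Values never exceed screenWidth, so the upper bytes are zero
        // and the sum stays well inside float precision.
        float value = enc.r * 255.0
                    + enc.g * 255.0 * 256.0
                    + enc.b * 255.0 * 65536.0
                    + enc.a * 255.0 * 16777216.0;
        // floor(value + 0.5) guards against rounding; here the value is
        // just visualised with an arbitrary 2048.0 scale.
        gl_FragColor = vec4(vec3(floor(value + 0.5) / 2048.0), 1.0);
    }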
The thing I’m working on has to stay compatible with really old versions of OpenGL (hence the GLSL 1.20 constraint).
Since the OpenGL specification for GLSL 1.20 says:
Integers are mainly supported as a programming aid. At the hardware level, real integers would aid
efficient implementation of loops and array indices, and referencing texture units. However, there is no
requirement that integers in the language map to an integer type in hardware. It is not expected that
underlying hardware has full support for a wide range of integer operations. Because of their intended
(limited) purpose, integers are limited to 16 bits of precision, plus a sign representation in both the vertex
and fragment languages. An OpenGL Shading Language implementation may convert integers to floats to
operate on them. An implementation is allowed to use more than 16 bits of precision to manipulate
integers. Hence, there is no portable wrapping behavior. Shaders that overflow the 16 bits of precision
may not be portable.
I’d like to know what could happen with the code above given this 16-bit precision limit.
Feedback on that possibly nonsensical shader would also be very much appreciated.
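For what it’s worth, a float-only variant of the same packing, which sidesteps the integer divisions entirely, would look something like this (untested sketch, same ssInt as above):

    // Same packing done on floats: floor() replaces the integer division,
    // so no large int intermediates like byte*byte*byte are needed.
    float v  = float(ssInt);
    float f0 = mod(v, 256.0);
    float f1 = mod(floor(v / 256.0), 256.0);         // 256 = 2^8
    float f2 = mod(floor(v / 65536.0), 256.0);       // 65536 = 2^16
    float f3 = mod(floor(v / 16777216.0), 256.0);    // 16777216 = 2^24
    gl_FragColor = vec4(f0, f1, f2, f3) / 255.0;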
Finally, I’d also like to know whether an RGBA8 format is really faster to read from than a single-channel format with the same overall pixel size.