Integers in GLSL 1.20 specification

Hi everyone,

I’m writing a simple shader that writes some data to a framebuffer. In particular, the data are integers whose meaningful range goes from 0 to screenWidth.

At the moment I’m writing to an RGBA / GL_UNSIGNED_BYTE texture this way:

int ssInt = /*MyValue*/;

int byte = 256;

// mod() only has float overloads in GLSL 1.20 (the '%' operator is
// reserved), so go through float to take each remainder.
int b0 = int(mod(float(ssInt), float(byte)));
int b1 = int(mod(float(ssInt / byte), float(byte)));
int b2 = int(mod(float(ssInt / (byte * byte)), float(byte)));
int b3 = int(mod(float(ssInt / (byte * byte * byte)), float(byte)));

gl_FragColor = vec4(float(b0) / 255.0, float(b1) / 255.0, float(b2) / 255.0, float(b3) / 255.0);
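
For reference, this is roughly how I’d reconstruct the value when sampling that texture in a later pass (just a sketch; packedTex and unpackValue are placeholder names, and it assumes the bytes survive the round trip through the 8-bit channels):

uniform sampler2D packedTex;

float unpackValue(vec2 uv)
{
    vec4 rgba = texture2D(packedTex, uv);
    // Undo the /255.0 scaling (rounding each channel back to an integer
    // byte), then recombine as b0 + b1*256 + b2*65536 + b3*16777216.
    return dot(floor(rgba * 255.0 + 0.5), vec4(1.0, 256.0, 65536.0, 16777216.0));
}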

This thing I’m working on is meant to be compatible with really old versions of OpenGL (hence the GLSL 1.20 constraint).

Since the GLSL 1.20 specification says:

4.1.3 Integers
Integers are mainly supported as a programming aid. At the hardware level, real integers would aid
efficient implementation of loops and array indices, and referencing texture units. However, there is no
requirement that integers in the language map to an integer type in hardware. It is not expected that
underlying hardware has full support for a wide range of integer operations. Because of their intended
(limited) purpose, integers are limited to 16 bits of precision, plus a sign representation in both the vertex
and fragment languages. An OpenGL Shading Language implementation may convert integers to floats to
operate on them. An implementation is allowed to use more than 16 bits of precision to manipulate
integers. Hence, there is no portable wrapping behavior. Shaders that overflow the 16 bits of precision
may not be portable.

I wanted to know how this 16-bit precision limit could affect the code above.
Also, any feedback on that potentially nonsensical shader would be very much appreciated.

In the end I’d also like to know whether an RGBA8 format is really faster to read from than a single-channel format with the same overall pixel size.

How is ssInt generated? 16 bits of precision means that any operation generating a value greater than 65536 isn’t guaranteed to calculate the least significant bits correctly; e.g. 65536+1 may result in 65536 or 65538. What the standard is saying is that GPU registers can be floats, and those floats might only have a 16-bit significand, so any additional bits would be lost to rounding.

If ssInt is less than 65536, b2 and b3 will be zero.
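
If you want to sidestep the integer question entirely, you could also do the split purely in float arithmetic. A sketch (reusing your ssInt, and assuming the float path gives you the usual single-precision mantissa, which is exact for integers up to 2^24 and therefore for 0..screenWidth):

// Same byte split, done entirely with float operations so nothing
// depends on how the implementation maps ints to hardware.
float v  = float(ssInt);
float b0 = mod(v, 256.0);
float b1 = mod(floor(v / 256.0), 256.0);
float b2 = mod(floor(v / 65536.0), 256.0);
float b3 = floor(v / 16777216.0);

gl_FragColor = vec4(b0, b1, b2, b3) / 255.0;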

BTW, if you’re sticking to OpenGL 2.1, bear in mind that the maximum texture size isn’t guaranteed to be any larger than 64×64 (it was increased to 1024×1024 in 3.0).

Thanks for the detailed answer. Where can I find all that information (like the guaranteed texture size for a given version of OpenGL)?

Each version of the specification has a section entitled “State Tables” which covers both state variables and implementation-defined limits.

This is on page 298 of the OpenGL 2.1 spec, but in practice the OpenGL spec is being ultra-conservative here and you can certainly rely on the size being larger. If you do encounter a 3D card that only supports 64×64 textures, it will either be in a museum or will belong to the type of user you don’t want to support (in other words, all of their other system components will be horrendously out of date as well, and you’ll be lucky if you’re able to test on their OS version, never mind compile).

It does highlight an important point though. If you are intentionally targeting a downlevel GL version for compatibility reasons, chances are that you’ve actually implicitly broken compatibility elsewhere in your toolchain. Maybe you require an amount of memory that no downlevel hardware actually even had. Maybe you require an OS version that drivers for downlevel hardware are not available for. Maybe your program runs too slow to be useful on downlevel hardware. Maybe it’s something else.

The point here is that using an older GL version isn’t enough to give you compatibility on its own. You need to examine the entire system, and chances are that your compatibility constraint is artificially self-imposed rather than real.
