# floating-point type accuracy in OpenGL

In the OpenGL 1.5 spec, section 2.1.1 (Floating-Point Computation), 2^32 is referred to as the maximum representable magnitude of floating-point numbers:

"…The maximum representable magnitude of a floating-point number used to represent positional or normal coordinates must be at least 2^32; the maximum representable magnitude for colors or texture coordinates must be at least 2^10. The maximum representable magnitude for all other floating-point values must be at least 2^32. x·0 = 0·x = 0 for any non-infinite and non-NaN x. 1·x = x·1 = x. x+0 = 0+x = x. 0^0 = 1…"

Even though there are 64-bit types, 2^64 is not mentioned anywhere. Can we simply say that none of the GL functions return floating-point values in double (64-bit) precision?

The maximum representable magnitude is not the same thing as precision.

In general, GL demands a precision of at least 1 part in 10^5, which is coarser than what today's IEEE 754 32-bit floats provide. About 17 bits of significand are enough, since 2^17 ≈ 1.3·10^5. For color, I don't know.

I think all GPUs out there, including software implementations, use 32-bit floats. And GPU FPUs are not fully IEEE 754 compliant, for the sake of speed.

Download Mesa (mesa.org) and modify it so that it uses 64-bit floats, if you really need that.