The spec on page 9 says:
‘All implementations must define VG_MAX_FLOAT to be at least 10^10.’

Yet on the previous page, it says:
“However, implementations may clamp extremely large or extremely small values to a restricted range, and internal processing may be performed with lesser precision. At least 16 bits of mantissa, 6 bits of exponent, and a sign bit must be present, allowing values from ±2^(±31) to be represented with a fractional precision of at least 1 in 2^16.”

I read this as the exponent being biased by 31.

But 2^((2^6 − 1) − 31)  =  2^32  =  4294967296,
while 10^10                      = 10000000000.
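As a quick sanity check on the arithmetic (this assumes my reading of a 6-bit exponent field biased by 31, which the spec does not state explicitly):

```python
# Under a 6-bit exponent field biased by 31, the largest unbiased
# exponent is (2**6 - 1) - 31 = 32, so the largest representable
# magnitude is on the order of 2**32.
max_internal = 2 ** ((2 ** 6 - 1) - 31)   # 4294967296
spec_minimum = 10 ** 10                   # required lower bound on VG_MAX_FLOAT

print(max_internal)                       # 4294967296
print(spec_minimum)                       # 10000000000
print(max_internal >= spec_minimum)       # False

# Even crediting the mantissa (max value ~ 2**32 * (2 - 2**-16),
# roughly 8.6e9), the result still falls short of 10**10.
print(2 ** 32 * (2 - 2 ** -16) >= 10 ** 10)   # False
```

So under this reading, the mandated internal representation cannot reach the mandated minimum for VG_MAX_FLOAT, even with the mantissa taken into account.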

So should VG_MAX_FLOAT be smaller (please let it be this case), or should there be 7 bits of exponent in the internal representation of the floats?