The float has 24 binary digits available, which means a cube 2 * 2^24 = 33554432 units wide (more than 33 million; times 2 because of the negative floats) is available for the world. But you need to keep some digits in reserve because of transformations (adds and mults). I suppose how far away the far plane is affects how many digits to reserve due to the adds, but how many more because of the mults? Can one find out only through trial and error?
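A quick way to see that 2^24 limit in practice: a minimal sketch using Python's struct module to round a double down to 32-bit float precision (the `f32` helper is mine, not a library function):

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Every integer up to 2^24 = 16777216 survives the round trip exactly...
assert f32(2**24) == 16777216.0
assert f32(2**24 - 1) == 16777215.0
# ...but 2^24 + 1 does not: it rounds back down to the nearest representable value.
assert f32(2**24 + 1) == 16777216.0
```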

I believe IEEE 32-bit float is s23e8, which means a sign bit, 23 bits of mantissa precision, and 8 bits of exponent.

log10( 2^23 ) ≈ 6.9, which means you have about 7 decimal digits of mantissa precision.

And as usual with floating point, for small numbers (0…1) you have quite a bit of fractional precision. But the larger your numbers get, the less and less you have.
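That shrinking precision can be made concrete. Here's a sketch that finds the gap to the next representable 32-bit float by incrementing the bit pattern (the `f32_ulp` helper is hypothetical, not a library function; it assumes x is non-negative and already exactly representable as a 32-bit float):

```python
import struct

def f32_ulp(x):
    """Spacing between x and the next larger 32-bit float."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return nxt - x

assert f32_ulp(1.0) == 2**-23      # ~0.00000012 near 1
assert f32_ulp(1024.0) == 2**-13   # ~0.00012 near a thousand
assert f32_ulp(2.0**24) == 2.0     # whole units are already lost past 2^24
```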

Note that while you have ~7 decimal digits of precision, if you’re computing your numbers using well-conditioned functions in 32-bit float, you’re doing well to keep 4 decimal digits of accuracy.

So I think the answer to your question is: it depends on the magnitude of your coordinates and whether you are using well-conditioned functions to operate on them.

I didn’t actually want to know the number of decimal digits; binary digits are digits too. I wrote 24 digits based on the fact that the mantissa (significand) is kept normalized at all times, so the leading 1 bit of the mantissa is implicit and not stored. If your world’s origin is at the center, then you can use the negative portion of the floats as well.
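For anyone following along, the implicit leading bit is easy to verify by picking the three fields apart (the `f32_fields` helper is mine):

```python
import struct

def f32_fields(x):
    """Split a 32-bit float into (sign, biased exponent, 23 stored mantissa bits)."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits >> 31, (bits >> 23) & 0xFF, bits & 0x7FFFFF

# 1.0 = +1.0 * 2^0: the leading 1 is implicit, so all 23 stored bits are zero.
assert f32_fields(1.0) == (0, 127, 0)
# 1.5 = 1.1 (binary) * 2^0: only the bits after the implicit 1 are stored.
assert f32_fields(1.5) == (0, 127, 1 << 22)
# -2.0: sign bit set, biased exponent 127 + 1 = 128, stored mantissa still zero.
assert f32_fields(-2.0) == (1, 128, 0)
```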

My question, in rephrased form, is how to scale the world. How do you select the smallest representable unit of your world, say 1 millimeter, or 1 centimeter? Say you don’t want to lose accuracy of representation: you always want that 1 unit of accuracy to be representable. Are there rules of thumb for how many (binary) digits to keep in reserve?
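One back-of-the-envelope way to frame it, assuming integer-unit coordinates and a 24-bit significand: count how many of the 24 bits a given world half-extent consumes, and whatever is left is your reserve (helper name and example numbers are mine):

```python
import math

def significand_bits_needed(half_extent_units):
    """Bits needed so every integer coordinate up to half_extent_units is exact."""
    return math.ceil(math.log2(half_extent_units))

# A world reaching 10 km from the origin at 1 mm per unit = 10_000_000 units:
assert significand_bits_needed(10_000_000) == 24  # uses all 24 bits, no reserve
# The same reach at 1 cm per unit = 1_000_000 units:
assert significand_bits_needed(1_000_000) == 20   # 4 bits left for transforms
```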

That is why people do transformations in doubles on the CPU, so as to minimise the number of computations done in float.
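As a sketch of that idea (the numbers are invented for illustration): subtract the camera position in double precision first, and only hand the small camera-relative offset to the float pipeline:

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest 32-bit float."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

world_x = 5_000_000.25   # object 5000 km from the origin, in metres
camera_x = 5_000_000.0

# Subtract in double, then round the small offset to float:
good = f32(world_x - camera_x)
# Round both operands to float first, as a float-only pipeline would:
bad = f32(world_x) - f32(camera_x)

assert good == 0.25
assert bad == 0.0  # the 25 cm offset vanished: float spacing is 0.5 m out here
```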

How do you select the smallest representable unit of your world, say 1 millimeter, or 1 centimeter? Say you don’t want to lose accuracy of representation, you want that 1 representable unit of accuracy always

Not an expert on large-scale rendering, but this should be about relative, not absolute, accuracy. “At 1 km of distance, 10 cm accuracy is enough”, etc.
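That relative view matches how floats actually behave: the spacing between neighbouring 32-bit floats is a roughly constant fraction of the value itself, between 2^-24 and 2^-23 at any magnitude. A quick check (the helper is hypothetical; the chosen values are all exactly representable as 32-bit floats):

```python
import struct

def f32_rel_spacing(x):
    """ulp(x) / x: the relative spacing of 32-bit floats at magnitude x."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    nxt = struct.unpack('<f', struct.pack('<I', bits + 1))[0]
    return (nxt - x) / x

# Stays within a factor of two of 2^-23 whether x is 1 metre or 1000 km:
for x in (1.0, 1000.0, 1e6, 1e9):
    assert 2**-24 < f32_rel_spacing(x) <= 2**-23
```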

Then any large-scale world should have some kind of LOD for distant objects.

So? How does one set the smallest representable unit for the highest LOD, then?