Floating-point arithmetic precision



I know this question maybe doesn’t belong to OpenGL; it has even been researched in many papers I’ve read. But I still haven’t solved it well and hope someone can help me this year.
When one surface cuts another, the contour tracing often fails due to the precision of floating-point arithmetic. For example, two intersection points, p0(24.234, 23.4565, -34.034) and p1(24.234, 23.4565, -34.034), can compare as unequal in OpenGL. After I loosen the tolerance they are accepted as equal, but then other problems can occur with the adjusted tolerance: two points can be accepted as equal while they are not actually equal.
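For illustration, a tolerance-based comparison along these lines is the usual workaround. This is only a sketch: the Point3 type, the nearlyEqual name, and the epsilon value are assumptions you would tune to your model's scale, not anything fixed by OpenGL.

```cpp
#include <cmath>

// Hypothetical point type for illustration.
struct Point3 {
    float x, y, z;
};

// Assumed tolerance: too large merges genuinely distinct points,
// too small misses duplicates. It must be tuned to the model's scale.
const float EPSILON = 1e-4f;

// Two points count as "equal" when every coordinate differs by
// less than the tolerance.
bool nearlyEqual(const Point3& a, const Point3& b, float eps = EPSILON) {
    return std::fabs(a.x - b.x) < eps &&
           std::fabs(a.y - b.y) < eps &&
           std::fabs(a.z - b.z) < eps;
}
```

No single epsilon is right for every model, which is exactly the dilemma described above.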

Another problem is errors between float and double. For example, when I use glClipPlane(GL_CLIP_PLANE0, eqn) to cut an object, the object’s data is defined as float, while eqn must be given to OpenGL as double. But I can’t convert the object’s float data to double for some reason.
What should I do?
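On the float-versus-double point: widening a float to a double is always exact, so only the plane equation needs converting, not the object data. A sketch of such a helper follows; widenPlaneEq and myFloatPlane are made-up names for illustration.

```cpp
#include <cstddef>

// glClipPlane expects a GLdouble[4], but vertex data is often stored
// as float. Every float is exactly representable as a double, so
// widening the plane equation loses nothing. (GLdouble is a plain
// double in the standard OpenGL headers.)
void widenPlaneEq(const float in[4], double out[4]) {
    for (std::size_t i = 0; i < 4; ++i)
        out[i] = static_cast<double>(in[i]);  // exact widening
}

// Sketch of use, assuming a current GL context:
//   double eqn[4];
//   widenPlaneEq(myFloatPlane, eqn);
//   glClipPlane(GL_CLIP_PLANE0, eqn);
//   glEnable(GL_CLIP_PLANE0);
```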



If you’re trying to create “perfect” geometry, then you have to use rational numbers with some large-number capable package.

Be warned that even with a very good factorizer, you will quickly run into the hundreds of digits of necessary precision. It won’t be fast. It helps if you can re-quantize all output to some specific resolution post-operation.
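Re-quantizing to a specific resolution might be sketched like this; the grid size is an assumed value, to be chosen from your model's tolerance.

```cpp
#include <cmath>

// Snap a coordinate to a fixed grid after each boolean/intersection
// operation, so nearly-coincident results collapse to the same value.
// GRID is an assumed resolution, not a recommendation.
const double GRID = 1e-6;

double quantize(double v, double grid = GRID) {
    return std::round(v / grid) * grid;
}
```

After quantization, downstream equality tests on the snapped values can be exact instead of epsilon-based.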

OpenGL implementations are free to choose a precision used internally, and downgrade doubles to whatever that implementation precision is. Most current cards do 32-bit floats. However, you’re not supposed to do your math inside the GL; it’s quite precise enough for display if you pre-scale and pre-translate your data using your own, high-precision math.

I think you over-estimate the precision of software. Catia has a very inaccurate math model and most companies insist you do not raise it. Even if you do, Catia will drop the precision back down to the default after certain operations. This is the reason why complex surfacing sucks in Catia. A compliant IEEE 32-bit floating-point engine defines a smallest representable positive value (FLT_MIN, with denormals going smaller still; I don’t remember it off the top of my head); anything below it underflows to zero. Learn to check your numbers by subtracting and comparing against an epsilon value that is small enough to give you the precision you need. All software does it. That’s just the way it works.
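A subtract-and-compare check like the one described might look as follows. Scaling the epsilon by the operands' magnitude is a common refinement added here, not something the post specifies, and the base epsilon is an assumed value.

```cpp
#include <cmath>
#include <algorithm>

// Compare two floats by subtraction against an epsilon. Scaling the
// tolerance by the operands' magnitude keeps the test meaningful for
// both large and small values; the base epsilon is an assumed
// starting point, not a universal constant.
bool almostEqual(float a, float b, float eps = 1e-5f) {
    float diff = std::fabs(a - b);
    float scale = std::max({1.0f, std::fabs(a), std::fabs(b)});
    return diff <= eps * scale;
}
```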


The gap between 1.0 and the next representable float is FLT_EPSILON (defined in float.h), about 1.19e-7 for single precision, if I’m not mistaken. The thing with floats is that precision is not constant as your numbers increase (or decrease), and doing calculations with numbers that vary greatly in magnitude gives imprecise results.
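The magnitude effect is easy to demonstrate; this small sketch just shows how a value far below the current spacing is lost outright.

```cpp
#include <cfloat>

// At magnitude 1e8 the spacing between adjacent floats is 8.0, so
// adding 1.0f is lost entirely. FLT_EPSILON (about 1.19e-7) only
// describes the spacing near 1.0.
float bigPlusSmall(float big, float small) {
    return (big + small) - big;  // ideally equals small
}
```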

Bottom line, your app needs to decide on the final value after computation. Increasing the precision doesn’t make the problem go away; it just makes it smaller.

Some scientific software uses an FPU emulator capable of 256-bit precision for these reasons.