OpenGL Internals.

x86-based CPUs have 80-bit internal precision in their FPUs, and have had for a long time. Microsoft states that they always set it to 64-bit precision in their runtime libraries.

Hi,

I’m not 100% sure, but I think there was a way to configure VC to support long double correctly. icc, gcc, and bcc also support it. In my experience the internal 80-bit precision does make some difference - for example, some LAPACK routines (linear systems, least-squares solvers) using an optimized BLAS library (ATLAS, for example) tend to get slightly more precise results on Athlon processors than on the P4, simply because ATLAS defaults to SSE2 on the P4, which I think internally uses 64 bits, compared to the standard 80 bits of the x87 FPU.
Now back on topic - it seems GPUs are attracting more and more attention in the numerical community these days: http://multires.caltech.edu/pubs/GPUSim.pdf http://multires.caltech.edu/pubs/GPUSubD.pdf http://research.microsoft.com/~hoppe/sgim.pdf
and precision issues might affect these kinds of applications, but I doubt 64-bit floats will find their way into mainstream GPUs soon. And as the first paper shows, single precision is sufficient, for example, for the implemented sparse matrix solver.

Regards
Martin

When OpenGL was invented, it used floating-point calculations at its core due to computational constraints. Nowadays, when transistors are no longer the limit for computation,

Not sure if this is what you are referring to, but in the original SGI Sample Implementation (which is widely used as the basis for most commercial Windows ICD drivers), all calls to any of the glVertexX functions were passed through to glVertexXfv(). Only glVertexXfv() actually did anything. This, of course, resulted in some lost precision for glVertexXd() calls. If you download the SGI SI, you can see this in the main\gfx\lib\opengl\glcore\s_vapi.c file.

Originally posted by martin_marinov:
[b]Hi,

I’m not sure 100%, but I think there was a way to turn VC to support long double correctly.[/b]

From what I heard, there isn't.
At least not in VC++ 5 and 6.

For VC++ 7 (.NET) I don't know.

PS: Switching to 64- or 80-bit isn't the only solution. You can write your own FPU emulator and support whatever precision you need.
Anyone know of a library that does this?