This is basically a poll to see which data type people prefer.
How does this relate to OGL?
Because of its extra precision, a double generally takes longer to compute with than an equivalent calculation using float, and it uses twice the memory (8 bytes vs. 4).
Why use double at all?
Some applications require the extra precision of double for adequate accuracy.
I for one always use double in place of float. Even though most of my code isn't numerically deep enough for an error of 10^-10 to blow up into something on the order of 10^+10, I like having the extra accuracy just so I know things are running as perfectly as possible.
Has anyone noticed a sizeable speed boost from using float in their applications, or are you all perfectionists who use double for the same reason I do?