Does anybody know why the authors of GLU decided to support only doubles for functions such as gluProject, gluUnProject, and gluTessVertex? I find it odd, since GL is type-friendly in almost every other part of the API, and most systems are optimized for floats. Up to this point I have kept my data as doubles, but I've been persuaded to change to floats to maintain compatibility with some legacy code. I realize I can write floating-point versions myself; I'm just curious about the design decision.
For GL, the assumption way back then was that video cards supporting doubles would one day become available, although that hasn't happened yet.
GLU is not part of GL.
I don't know the reason. There are other math libs and utility libs you could use instead.
Lots of libs here: Math++, GLM, Graphics Library Helper (that one is mine).