Is there a default data type that (all?) OpenGL implementations use for the internal representation and calculation of coordinates?
This matters to me because if the internal data type is a 32-bit float, rounding errors will occur on large numbers. Knowing the internal representation, I can calculate how large my numbers may grow before hitting critical rounding precision.
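To illustrate the concern, here is a minimal C sketch (the names are mine) that prints the smallest representable step between adjacent 32-bit floats at several magnitudes. At 2^24 the spacing is already 2.0, so integer coordinates above that can no longer be represented exactly:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    float coords[] = { 1.0f, 1024.0f, 1048576.0f, 16777216.0f };
    for (int i = 0; i < 4; i++) {
        float c = coords[i];
        /* nextafterf gives the next representable float above c;
         * the difference is the smallest coordinate step at that scale. */
        printf("at %.0f the smallest representable step is %g\n",
               c, nextafterf(c, INFINITY) - c);
    }
    return 0;
}
```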
The OpenGL spec does not mandate any internal format, but as far as I know, most OpenGL implementations use 32-bit floats for the internal representation of vertices, colors, and normals.
I think colors are mostly stored at 8 bits per color component (apart from newer texture formats).
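If you want to verify the per-channel color depth on your own machine, here is a hedged sketch using the legacy `GL_*_BITS` queries (these were removed from the core profile in OpenGL 3.2; the sketch assumes a current OpenGL context has already been created):

```c
#include <GL/gl.h>
#include <stdio.h>

/* Query the framebuffer's bit depth for each color channel. */
void print_color_depth(void)
{
    GLint r, g, b, a;
    glGetIntegerv(GL_RED_BITS,   &r);
    glGetIntegerv(GL_GREEN_BITS, &g);
    glGetIntegerv(GL_BLUE_BITS,  &b);
    glGetIntegerv(GL_ALPHA_BITS, &a);
    printf("color depth: R%d G%d B%d A%d\n", r, g, b, a);
}
```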
I guess you could check out the gl.h file to see what the formats are. They could differ according to the implementation, though I doubt it.
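For reference, these are the typedefs you would typically find in gl.h; the exact definitions can vary per vendor, but these are the common ones:

```c
/* Typical gl.h typedefs (exact definitions may vary by implementation). */
typedef float          GLfloat;   /* single-precision, at least 32 bits */
typedef double         GLdouble;  /* double-precision, at least 64 bits */
typedef unsigned char  GLubyte;   /* 8-bit components, e.g. for glColor3ub */
```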