epsilon requirements

hi,
I’m a SW engineer on an OpenGL SW verification project. We have been testing a set of GL functions that will run on an embedded PPC platform that has never been tested before. While writing test steps for the GL functions that take floating-point parameters, we observed that the values we get back are not the same as the values we set.

e.g.:
// In the code below the red component is set to 0.5 (≈ #800000), then the current color is read back

GLdouble afColorValues[4];
glClear(GL_COLOR_BUFFER_BIT);
glColor4d(0.5,0.0,0.0,0.0);
glGetDoublev(GL_CURRENT_COLOR,afColorValues);
printf("%.16f\n", afColorValues[0]); // print the red component

But when I read back the RED component of the current color with glGetDoublev, I saw that it was

0.5019608139991760

Here there is a difference of 0.0019608139991760 between the value set and the value obtained. I cannot mark this step as either passed or failed because I do not have any official epsilon value defining the acceptable precision error range. I wonder whether there are any epsilon requirements that define the acceptable precision for these GL functions.

Such a requirement might read, e.g.:

“glColor4d shall set color values with a precision of epsilon = 0.00001”

Is there any requirement like that? If so, how can I get those epsilon values?

From the OpenGL 1.5 spec, section 2.1.1:

We require simply that numbers’ floating-point parts contain enough bits and that their exponent fields are large enough so that individual results of floating-point operations are accurate to about 1 part in 10^5.
You should probably check the spec for more details, exceptions, etc…

For the specific case you listed, it sounds like colors are being stored as 8-bit values: 0.5 scales to 128/255, and 0.5019608… is what you get back after the round trip. Not quite up to the spec if I read it right, but it may produce ‘good enough’ results if your embedded system needs speed more than quality (slow processor and/or limited color-depth display)…