A pointer for those who follow this. I'm not sure why, but reading back GL_FLOAT with glReadPixels does not return the value I wrote. I rendered a triangle with red = 0.5, and the value returned for the pixel is 0.501961.
If you use R = 1.0f you get 1.0f back, but for other values the result doesn't match. From the glReadPixels man page there should be no conversion step when reading floats; nevertheless something goes wrong.
I disabled lighting, color material, and smooth shading (using GL_FLAT), sent no normals, and cleared the color and depth buffers with a (0, 0, 0, 0) clear color.
unsigned char colors[3*(facets+1)] = {0};
// distribute the facet ids evenly over the 24-bit colour space
int pi = 255*255*255 / (facets+1);
unsigned int cval;
for (i = 0; i < facets; i++)
{
    cval = (unsigned int)pi * (i+1);
    // pack the 24-bit id into R,G,B explicitly; memcpy'ing the int's
    // bytes would depend on the machine's endianness
    colors[3*i + 0] = (cval >> 16) & 0xFF;
    colors[3*i + 1] = (cval >>  8) & 0xFF;
    colors[3*i + 2] =  cval        & 0xFF;
}
// disable any lights, smoothness, materials etc
glClearColor(0,0,0,0);
// render here using:
glColor3ubv(colors + 3*current_facet);
// swapbuffers
glPixelStorei( GL_PACK_ALIGNMENT, 1 );  // glReadPixels packs, so PACK (not UNPACK) alignment applies
glReadPixels(0,0,width,height, GL_RGB, GL_UNSIGNED_BYTE, pixels);
int visible[facets] = {0};
for ( y ... )
    for ( x ... )
    {
        if (pixel is black) continue;
        for (each colour C used)
        {
            if (visible[C]) continue;   // already marked, skip it
            if (pixel colour == colour of C)
            {
                visible[C] = 1;
                break;
            }
        }
    }
Well, this pseudo-C should be enough; it works wonderfully. The point is: beware of GL_FLOAT pixel readbacks!
OSX 10.5 Leopard, Xcode 3.1.2
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI Radeon X1600 OpenGL Engine
OpenGL version string: 2.0 ATI-1.5.36