About a glTexImage2D problem

I have two arrays that contain the pixel data glTexImage2D needs.
As far as I know, the two are equal in memory. I pass the pixel data to glTexImage2D,
but they produce different results.
I don't know why.

GLubyte pixels[4 * 4] =
{
	255,   0,   0, 255, // Red
	  0, 255,   0, 255, // Green
	  0,   0, 255, 255, // Blue
	255, 255,   0, 255  // Yellow
};

// GLuint pixels2[4] =
// {
//     (255L << 24) + (0 << 16) + (0 << 8) + 255,
//     (0 << 24) + (255L << 16) + (0 << 8) + 255,
//     (0 << 24) + (0 << 16) + (255L << 8) + 255,
//     (255L << 24) + (255L << 16) + (0 << 8) + 255,
// };

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Thanks in advance. Any feedback will be appreciated. :slight_smile:

You are wrong; those two arrays are not equivalent on a system using the little-endian byte order (http://en.wikipedia.org/wiki/Endianness). Also, this is not really an OpenGL question.
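To see the mismatch without any OpenGL at all, you can inspect how one of those packed GLuint values actually sits in memory. A minimal sketch (the `uint_to_bytes` helper name is mine, not from the post):

```c
#include <string.h>

/* Copy the in-memory byte layout of a 32-bit value into out[4].
   memcpy exposes the actual storage order, which depends on the
   host's endianness. */
void uint_to_bytes(unsigned int v, unsigned char out[4])
{
    memcpy(out, &v, 4);
}

/* "Green" as packed in the commented-out pixels2 array:
   (0 << 24) + (255 << 16) + (0 << 8) + 255 == 0x00FF00FF.
   The byte array intends {0, 255, 0, 255}; on a little-endian
   machine the least significant byte is stored first, so memory
   actually holds {255, 0, 255, 0}. (Red, 0xFF0000FF, happens to be
   a palindrome in bytes, which can hide the problem.) */
```

Calling `uint_to_bytes(0x00FF00FFu, b)` on a typical x86 machine fills `b` with `{255, 0, 255, 0}`, not the `{0, 255, 0, 255}` that the GLubyte array contains.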

The second array is GLuint, and you provide GL_UNSIGNED_BYTE as a parameter to glTexImage2D, so it interprets the second array incorrectly. You'd be better off converting the GLuint array to GLubyte before feeding it to glTexImage2D. OpenGL is not very friendly with integer data (it's a bit trickier to work with and it requires extensions), and I doubt you really need it.

But if you do:

This answer was made assuming the arrays really are equal, but the post above mine says they are not. I didn't really bother interpreting this mess; even the size of the arrays is "4*4" in the first case and just "4" in the second.
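For completeness, if you did want to keep a packed GLuint source array, unpacking it with explicit shifts is endianness-safe, because shifts operate on the value, not on its storage order. A sketch under that assumption (the `unpack_rgba` helper name is mine):

```c
#include <stddef.h>

/* Unpack 32-bit values laid out as R in the most significant byte
   down to A in the least significant byte into an RGBA byte array.
   Shifting is endian-independent, unlike memcpy or pointer casts. */
void unpack_rgba(const unsigned int *src, unsigned char *dst, size_t count)
{
    for (size_t i = 0; i < count; ++i) {
        dst[4 * i + 0] = (unsigned char)(src[i] >> 24); /* R */
        dst[4 * i + 1] = (unsigned char)(src[i] >> 16); /* G */
        dst[4 * i + 2] = (unsigned char)(src[i] >> 8);  /* B */
        dst[4 * i + 3] = (unsigned char)(src[i]);       /* A */
    }
}
```

The resulting byte array can then be passed to glTexImage2D with GL_RGBA / GL_UNSIGNED_BYTE as in the first array. Alternatively, desktop OpenGL has the packed pixel type GL_UNSIGNED_INT_8_8_8_8, which tells GL to read each GLuint most-significant-byte-first, so the original pixels2 array would work unchanged with that type.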

Ignore the comment from Nowhere-01; he/she is confusing things. The problem is the endianness. There is no "incorrect interpretation". Everything is well defined, and what he is referring to (integer texture formats) is a completely different thing.

OK, I didn't pay attention to the contents of the arrays, but I don't get why you would do it like that. The confusing comment was removed. It really is an endianness issue.

Oh, I get it: an endianness problem. My fault. Thank you for mentioning it. :stuck_out_tongue: