glTexImage2D: Resulting texture not matching passed data

I am using the following code in a small test program to load a mipmap into texture memory:

GLenum format = (mipmap.getMap(0)->getChannels() == 3) ? GL_RGB : GL_RGBA;

for(uint16 n = 0; n < mipmap.getLevels(); n++)
{
    glTexImage2D(GL_TEXTURE_2D, n, mipmap.getMap(n)->getChannels(),
                 mipmap.getMap(n)->getWidth(), mipmap.getMap(n)->getHeight(),
                 0, format, GL_UNSIGNED_BYTE, mipmap.getMap(n)->getElementData());
}

This works perfectly well for both square and rectangular textures, except in one case: when I load a 2x2 texture, one of the texels on the second row gets an incorrect color value.

Now, I am not a total newbie; I have checked my code thoroughly, and after many debugging sessions I could find nothing wrong.

So I created a blank 2x2 image and loaded it directly, without any mipmapping. I checked that the channels, width, height, and data fields all contained correct values the moment before glTexImage2D was called. Sure enough: channels == 3, width == 2, height == 2, and data contained an array of 12 zeroes (a new image is black by default).

Yet still there was an error. One of the texels on the second row was colored 0,255,255 instead of 0,0,0.

I have no idea how to fix this. It puzzles me that 1x1, 1xn or nxn textures all work, yet 2x2 textures do not.

Aulcard the frustrated.

Edit:

I have also done a little extra investigation. In order to experiment I decided to manually create / pass image data rather than using my Image class:

const GLushort imgChannels = 3;
const GLushort imgWidth = 2;
const GLushort imgHeight = 2;

GLubyte imgData[14]; // imgChannels * imgWidth * imgHeight + 2
for(GLushort c = 0; c < 14; c++) imgData[c] = 0;

glTexImage2D(GL_TEXTURE_2D, 0, imgChannels, imgWidth, imgHeight, 0, GL_RGB,
             GL_UNSIGNED_BYTE, imgData);

In this experiment I started with an array of 12 GLubytes, and once again I got a texel colored 0,255,255. When I changed to 13 GLubytes, the result was 0,0,255, and upon changing to 14, I finally got an entirely black image. This makes no sense to me. A 3-channel 2x2 image should always be exactly 12 bytes, and likewise a 4-channel 2x2 image should always be exactly 16 bytes. Why, then, is OpenGL reading 14 bytes?

Obviously I am doing something wrong. Do I need to supply a 4-channel image, even though I have set internalformat to 3 and format to GL_RGB?

Aulcard the perplexed.

… with yet more playing around I have realized that OpenGL is trying to read in an RGBA image whether I specify GL_RGB or GL_RGBA. That is to say, it behaves identically for both. I set the first 8 GLubytes in imgData to 255 and specified GL_RGBA. As expected, I got two white and two black texels. When I set GL_RGB, there should have been two white, one black, and one colored texel; instead I still had only two white and two black texels.

So now I am … even more perplexed.

[This message has been edited by Aulcard (edited 10-06-2003).]

The problem is the unpack alignment. The default alignment is 4, which means every row in the image must start on an offset from the start address (the pointer you pass to TexImage2D) that is a multiple of 4. A 3-channel image (assuming 1 byte per channel) with 2 pixels per row is not aligned properly (3*2=6 is not a multiple of 4). So set alignment to 2 or 1 before uploading the texture.

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // or 2

A 4-channel image with 2 pixels per row is aligned properly (4*2=8, which is a multiple of 4). It's not that OpenGL tries to read RGBA instead of RGB; it's just that the read pointer needs to be realigned after each row.
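To make this concrete, here is a minimal sketch of the 2x2 RGB case with the alignment fixed (the byte offsets in the comments also show where the 14 bytes in your experiment come from):

// 2x2 RGB image, 1 byte per channel: each row is 6 bytes.
// With the default GL_UNPACK_ALIGNMENT of 4, row 1 is read starting at
// offset 8 (6 rounded up to the next multiple of 4), so GL touches bytes
// 0..5 and 8..13, 14 bytes in total, which is exactly what you observed.
GLubyte imgData[12] = {0}; // 3 * 2 * 2 = 12 bytes, all black

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows tightly packed (2 also works here)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 2, 2, 0, GL_RGB, GL_UNSIGNED_BYTE, imgData);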

[This message has been edited by Bob (edited 10-06-2003).]

Originally posted by Bob:

[quote]

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // or 2

[/quote]

Thank you, my problem has been solved.
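For anyone who hits this later, the fixed upload loop (same Mipmap interface as in my first post) now looks something like this:

// The Image rows are tightly packed, so drop the default 4-byte row alignment.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

GLenum format = (mipmap.getMap(0)->getChannels() == 3) ? GL_RGB : GL_RGBA;

for(uint16 n = 0; n < mipmap.getLevels(); n++)
{
    glTexImage2D(GL_TEXTURE_2D, n, mipmap.getMap(n)->getChannels(),
                 mipmap.getMap(n)->getWidth(), mipmap.getMap(n)->getHeight(),
                 0, format, GL_UNSIGNED_BYTE, mipmap.getMap(n)->getElementData());
}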