Texture dimensions not power of 2

The Red Book, 7th ed., says:
“For OpenGL implementations that do not support version 2.0 or greater, both width and height must have the form 2^m + 2b, … For OpenGL implementations supporting version 2.0 and greater, textures may be of any size.”

My card supports 3.3, but textures with non-power-of-2 dimensions (e.g., 480 x 480) don’t come out right. Powers of 2 are fine, though. My textures are in BMP format, if that matters. Any idea why?

Thanks in advance,

Check your image-loading code; BMP uses row padding. Also, specify what “don’t come out right” means: what happens, and how does it look?

Hi Nowhere,
Thanks for the response. Attached are screen grabs of what’s drawn for a texture sized 512x512 and then for one scaled to 480x480. Below is the part of the code that loads the image:

struct BitMapFile
{
   int sizeX;
   int sizeY;
   unsigned char *data;
};

// Routine to read a bitmap file.
// Works only for uncompressed bmp files of 24-bit color.
BitMapFile *getBMPData(string filename)
{
   BitMapFile *bmp = new BitMapFile;
   unsigned int size, offset, headerSize;

   // Read input file name.
   ifstream infile(filename.c_str(), ios::binary);

   // Get the starting point of the image data.
   infile.seekg(10);
   infile.read((char *) &offset, 4);

   // Get the header size of the bitmap.
   infile.read((char *) &headerSize, 4);

   // Get width and height values in the bitmap header.
   infile.seekg(18);
   infile.read((char *) &bmp->sizeX, 4);
   infile.read((char *) &bmp->sizeY, 4);

   // Allocate buffer for the image.
   size = bmp->sizeX * bmp->sizeY * 24;
   bmp->data = new unsigned char[size];

   // Read bitmap data.
   infile.seekg(offset);
   infile.read((char *) bmp->data, size);

   // Reverse color from bgr to rgb.
   int temp;
   for (int i = 0; i < size; i += 3)
   {
      temp = bmp->data[i];
      bmp->data[i] = bmp->data[i+2];
      bmp->data[i+2] = temp;
   }

   return bmp;
}

BitMapFile *image[1];

// Load the texture.
image[0] = getBMPData("…/Textures/launch.bmp");

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image[0]->sizeX, image[0]->sizeY, 0,
GL_RGB, GL_UNSIGNED_BYTE, image[0]->data);

// Read bitmap data.
infile.read((char *) bmp->data , size);

As I said, you have padding issues. The problem is not that your texture is NPOT, but that each row in the file is 4-byte aligned (zero bytes are appended at the end of every row to make the row size a multiple of 4), while you are reading it as a continuous RGB array.
You can fix it the dirty way, just to confirm I’m right, by placing these calls in your code (before passing the image to OpenGL):

glPixelStorei ( GL_UNPACK_ALIGNMENT,   4 );
glPixelStorei ( GL_UNPACK_ROW_LENGTH,  0 );
glPixelStorei ( GL_UNPACK_SKIP_ROWS,   0 );
glPixelStorei ( GL_UNPACK_SKIP_PIXELS, 0 );

But you shouldn’t use GL_UNPACK_* for that; you should parse the BMP properly, which means taking the row padding into account. See Loading Bitmaps, Sections #3 and #4.

Also, this code is incorrect:

// Allocate buffer for the image.
size = bmp->sizeX * bmp->sizeY * 24;
bmp->data = new unsigned char[size];

It should be 3 instead of 24: 24 is the bit depth, but you store the data in bytes, and 24 bits = 3 bytes, one per RGB component.
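As a quick sanity check (an illustrative sketch, not the original code), the packed buffer size for 24-bit color works out like this:

```cpp
// Illustrative helper: packed buffer size for a 24-bit (3 bytes/pixel) image.
unsigned int imageSize(int width, int height)
{
    return (unsigned int)width * (unsigned int)height * 3u;  // 3 bytes, not 24
}
```

For 480 x 480 this gives 691,200 bytes; multiplying by 24 instead would over-allocate by a factor of 8 and read far past the end of the pixel data in the file.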

Thanks, Nowhere-01!