glTexCoord2f texture mapping weird

Hi all.

Right, this is the scenario. I've created a 2D square using GL_POLYGON so it takes up the entire screen (by setting the viewpoint so it does). I've then successfully mapped a BMP onto this polygon so the BMP takes up the entire screen (kind of like a wallpaper). This works correctly, and this is the code I've used:


	Texture t = Texture();
	int i = t.LoadBitmapW("c:/board.bmp");
	glBindTexture(GL_TEXTURE_2D, i);


	i = t.LoadBitmapW("c:/img.bmp");
	glBindTexture(GL_TEXTURE_2D, i);

		glTexCoord2f(0.0, 0.0); glVertex3f(0.77, 0.628, 0.1);
		glTexCoord2f(1.0, 0.0); glVertex3f(0.92, 0.628, 0.1);
		glTexCoord2f(1.0, 1.0); glVertex3f(0.92, 0.885, 0.1);
		glTexCoord2f(0.0, 1.0); glVertex3f(0.77, 0.885, 0.1);



The code should be easy enough to understand. A big polygon is created and the file “c:/board.bmp” is mapped onto it; this works fine. Then I create a smaller polygon, place it near the top right-hand corner of the viewport, and map the BMP “c:/img.bmp” onto it. However, this image isn't mapped correctly onto the polygon; it is stretched to the corners. I believe I have the winding of the polygons correct (counter-clockwise), and the glTexCoord2f calls are also ordered counter-clockwise, yet this second polygon's mapping is wrong, unlike the first polygon's. Can anyone see what I've done wrong?

Also, I have another question for you. I'm very new to OpenGL and this is my first program. Would you say that the way I've put my program together is untidy, unclean and unprofessional? If so, how can I improve it?

Thanks guys :slight_smile:

Texturing works this way: each vertex is supplied with a texture coordinate (like you do it). When the polygon is rendered, the texture is mapped onto the polygon using these coordinates at the vertices and interpolating between them. Consider the following: the texture is like a thin stretching film, and your polygon is defined through its vertices. OpenGL fixes the points of the texture given by the texture coordinates onto the corresponding vertices, so the image is stretched/shrunk (or partly invisible) according to them.

So you are doing nothing wrong, but the texture coordinates you specify are actually the edges of the texture. In other words, you map the whole texture onto the smaller polygon. The part of the texture rendered does not depend on the size of the polygon, but on the texture coordinates you send in. You can consider looking into the OpenGL specification; the section about texturing is written pretty well and should explain it.
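The interpolation described above can be sketched as plain arithmetic (a hypothetical helper, not actual OpenGL API). Note that the polygon's on-screen size never appears in the formula: only the per-vertex texture coordinates and the fractional position across the polygon decide which texel is sampled.

```c
/* Sketch of how OpenGL derives a texture coordinate for an interior
 * point of a quad: linear interpolation between the coordinates given
 * at the corners.  The polygon's screen-space size is irrelevant. */
static float lerp(float a, float b, float t) {
    return a + (b - a) * t;
}

/* Texture coordinate sampled at fractional position t (0..1) along one
 * axis of a quad whose edges carry texcoords s0 and s1. */
static float sampled_texcoord(float s0, float s1, float t) {
    return lerp(s0, s1, t);
}
```

With texcoords 0..1 the middle of the polygon always samples the middle of the texture, whether the polygon fills the screen or a small corner of it.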

Second: there is no need to call glPushMatrix/glPopMatrix. These functions are used to save the transformation matrix before applying changes like glTranslate/glRotate etc., so that you can build a hierarchy of objects (for example, pushing, transforming, drawing and popping once per object in a loop). In your case you don't modify the matrix, so you can delete them.

Third: it is not a good idea to load the textures inside the rendering loop, as this is a very slow process. You should create a loading step (I bet you have something like a setup function where you set up the matrices etc.) and place the texture loading there. When rendering, just use the saved texture ID to bind it (with glBindTexture); the image data is kept by OpenGL once you associate the texture with an ID.

Hope this helps…

Hi thanks for the reply :slight_smile:

I must admit I am slightly confused because I am relatively new to OpenGL and its concepts. Are you saying that the problem I'm having isn't to do with the theory behind my programming but the actual size of the second polygon?

Thanks for the tip about taking the loading out of the main rendering loop. :slight_smile:

OK, maybe I understood your question wrongly. What do you mean by “stretched to the corners”? Do you expect only part of the texture to be visible? Your code should produce the image with the texture completely mapped onto the rectangle. Can you post a screenshot?

Yeah sure:

This first image is the BMP that's being mapped to the first polygon.

This second image is the BMP that's being mapped to the second polygon.


This final image is a screenshot of my OpenGL program.


As you can see, the second image isn't mapping correctly; it comes out diagonally.

Thanks for your help :slight_smile:

Note, I'm not concerned with the change in font colour from red to yellow in the second image.

The second image is uploaded incorrectly to OpenGL because of its odd size. Try to use glPixelStore( GL_UNPACK_ALIGNMENT, 1 ).

Thanks. Where would I use that in the program? :slight_smile:

Also should it be glPixelStoref not glPixelStore? Thanks :slight_smile:

Somewhere in the initialization part before you create the textures.

Use the glPixelStorei since that is most appropriate for integer values like this.

One more comment on the texture dimensions you are using. Older cards (including the GeForce FX family) do not support textures with dimensions that are not a power of two through the GL_TEXTURE_2D target. EDIT: If they expose OpenGL 2.0, they have to accept them, however it is likely that the rendering will run in software emulation. To use such textures on such cards you have to use the (ARB|EXT|NV)_texture_rectangle extension.
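Rounding an image dimension up to the nearest power of two, as GL_TEXTURE_2D requires on that older hardware, can be sketched as a small hypothetical helper:

```c
/* Hypothetical helper: round an image dimension up to the nearest
 * power of two, e.g. for resizing/padding a texture so it is accepted
 * by GL_TEXTURE_2D on pre-GL-2.0 hardware. */
static int next_pow2(int n) {
    int p = 1;
    while (p < n)
        p *= 2;   /* double until we reach or pass n */
    return p;
}
```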

Hi thanks again :slight_smile:

I threw glPixelStorei( GL_UNPACK_ALIGNMENT, 1); into pretty much all sections of my code with no luck :frowning:

With your second comment, do you mean that I may need to re-edit my picture so instead of having dimensions of 99x164 it has 128x164? Thanks again.

Hm, the first texture is also non-power-of-two, so it shouldn't be displayed either.
Try resizing the images to power-of-two dimensions (for example, the first one to 1024x1024 or 512x512; 1024x512 will do too; and the other one likewise to 128x256, for example) and see how it works.
This may be a problem with the texture loader… Where do you have it from?

p.s. Milk, sorry for my first reply, I understood your question wrongly…

Hi, when you say “where do you have the texture loader from”, do you mean where did I get the code to load the bitmap into raw data so it can be mapped? If so, then I got it from this website here:


The code I'm using is this:

int Texture::LoadBitmap(char *filename)
{
   unsigned char *l_texture;
   int i, j = 0;
   FILE *l_file;
   BITMAPFILEHEADER fileheader;

   if ((l_file = fopen(filename, "rb")) == NULL) return -1;
   fread(&fileheader, sizeof(fileheader), 1, l_file);
   fseek(l_file, sizeof(fileheader), SEEK_SET);
   fread(&infoheader, sizeof(infoheader), 1, l_file);

   l_texture = (unsigned char *) malloc(infoheader.biWidth * infoheader.biHeight * 4);
   memset(l_texture, 0, infoheader.biWidth * infoheader.biHeight * 4);

   for (i = 0; i < infoheader.biWidth * infoheader.biHeight; i++)
   {
      fread(&rgb, sizeof(rgb), 1, l_file);

      l_texture[j+0] = rgb.rgbtRed;   // Red component
      l_texture[j+1] = rgb.rgbtGreen; // Green component
      l_texture[j+2] = rgb.rgbtBlue;  // Blue component
      l_texture[j+3] = 255;           // Alpha value
      j += 4;                         // Go to the next position
   }
   fclose(l_file);

   glBindTexture(GL_TEXTURE_2D, num_texture);
   glTexImage2D(GL_TEXTURE_2D, 0, 4, infoheader.biWidth, infoheader.biHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
   gluBuild2DMipmaps(GL_TEXTURE_2D, 4, infoheader.biWidth, infoheader.biHeight, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
   free(l_texture);
   return num_texture;
}
I am currently in the process of resizing the images. However, I cannot see that being the problem, as the first image wasn't a power of 2 yet it seemed to load fine.

If I remember correctly, the BMP file format requires that all lines inside the file are aligned to a multiple of 4 bytes. This works for the 800-pixel-wide image (800 × 3 = 2400) and breaks for the 99-pixel-wide image (99 × 3 = 297), in which case the file likely contains some zero bytes at the end of each line to achieve that alignment.
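The alignment rule above can be sketched as arithmetic (hypothetical helpers, not part of any BMP library): a 24-bit row occupies 3 bytes per pixel, rounded up to the next multiple of 4.

```c
/* Stored size of one 24-bit BMP row: 3*width rounded up to a
 * multiple of 4 bytes, as the BMP format requires. */
static int bmp_row_size(int width) {
    return (width * 3 + 3) & ~3;
}

/* Zero padding bytes appended to each row to reach that size. */
static int bmp_row_pad(int width) {
    return bmp_row_size(width) - width * 3;
}
```

For the 800-pixel image the row is already aligned (no padding); for the 99-pixel image each row carries 3 padding bytes, which is exactly what a naive per-pixel read loop trips over.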

The glPixelStorei needs to be called only once; that state is remembered by OpenGL and used until you change it again.

Originally posted by Milk:

With your second comment, do you mean that I may need to re-edit my picture so instead of having dimensions of 99x164 it has 128x164? Thanks again.

If you wish to support the old cards (older than the GF6800 for nVidia), you have three ways to do that.

  1. You can use the extensions I mentioned; this allows you to use such textures (with dimensions that are not a power of two) with some limitations.

  2. You can resize the texture to the nearest higher power-of-two dimensions.

  3. You can create a bigger texture with power-of-two dimensions, fill only part of it, and adjust the texture coordinates appropriately.
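Option 3 can be sketched like this (a hypothetical helper, assuming the image is stored in one corner of the larger texture): the texture coordinate that addresses just the image region is simply image size divided by texture size.

```c
/* If a small image occupies one corner of a larger power-of-two
 * texture, the texcoord range covering just the image along one axis
 * is [0, image_extent / texture_extent] instead of [0, 1]. */
static float partial_texcoord(int image_extent, int texture_extent) {
    return (float)image_extent / (float)texture_extent;
}
```

So a 99x164 image placed in a 128x256 texture would be drawn with s from 0 to 99/128 and t from 0 to 164/256, rather than 0 to 1.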

OK, will this work then: if I have both images at 800x600 (as this is a multiple of 4 bytes) and then transform the second polygon so it is the size that I want? Thanks :slight_smile:

I have found a description mentioning the alignment here, at the beginning of the chapter “Image Data and Compression”.

You're right, it works!! Thank you both so very, very much.

This is a really lame question now, but what pixel dimensions (like 800x600) can I use so it will always be a multiple of 4 bytes? Thanks!!!

Originally posted by Milk:
OK, will this work then: if I have both images at 800x600 (as this is a multiple of 4 bytes) and then transform the second polygon so it is the size that I want? Thanks :slight_smile:
If you have both images at the 800x600 size then it will work on cards that support non-power-of-two textures. You can also fix your BMP loader to correctly handle the line alignment inside the BMP file so it will load the 99-pixel-wide file correctly.

Originally posted by Milk:
This is a really lame question now, but what pixel dimensions (like 800x600) can I use so it will always be a multiple of 4 bytes? Thanks!!!
For your RGB images it is any width for which (width * 3) is a multiple of four. The height of the image is not important for this purpose.

It would be better, however, to fix your loader instead. That way you can load BMPs of any size.
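A minimal sketch of that loader fix, assuming the 24-bit BMP reader from earlier in the thread: read each row of pixels, then skip the padding bytes that round the row up to a multiple of 4 (the names here are hypothetical, not from the original loader).

```c
#include <stdio.h>

/* Padding bytes at the end of each row in a 24-bit BMP. */
static int bmp_row_pad(int width) {
    return (4 - (width * 3) % 4) % 4;
}

/* Hypothetical row-aware reading loop: reads width*height BGR triples
 * into an RGBA buffer, skipping the per-row padding that the original
 * per-pixel loop ignored.  'f' must be positioned at the pixel data. */
static void read_bmp_pixels(FILE *f, int width, int height, unsigned char *rgba) {
    int pad = bmp_row_pad(width);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned char bgr[3];
            fread(bgr, 1, 3, f);          /* BMP stores blue, green, red */
            unsigned char *p = rgba + (y * width + x) * 4;
            p[0] = bgr[2];  /* red   */
            p[1] = bgr[1];  /* green */
            p[2] = bgr[0];  /* blue  */
            p[3] = 255;     /* alpha */
        }
        fseek(f, pad, SEEK_CUR);          /* skip the row padding */
    }
}
```

With the padding skipped per row, the 99-pixel-wide image loads correctly without glPixelStorei tricks or resizing.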