Texture mapping segfault :(

I have a BMP (it's a horizontal strip) that I want to break up into smaller 32x32 chunks and store each chunk in an array slot. For some reason I get a segfault on my glTexImage2D() calls. I'm using X 4.2, SDL, and GCC 3.1. Any ideas? Here's my function:

bool LoadTileSet(unsigned int setid)
{
    // NOTE: setid is unused at the moment, use when multiple sets exist.
    // eventually gonna use a tile set class or something like that.

    SDL_Surface *TextureImage = NULL;
    SDL_Surface *TextureImageTemp = NULL;
    SDL_Rect src, dest;

    TextureImage = SDL_LoadBMP("tiles.bmp");
    if (TextureImage == NULL)
        return false;

    if (background_tiles)
        glDeleteTextures(MAX_TILES, background_tiles);

    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glGenTextures(MAX_TILES, &background_tiles[0]);

    for (int i = 0; i < MAX_TILES; i++) {
        src.x = i * 32;
        src.y = 0;
        src.w = 32;
        src.h = TextureImage->h;

        dest.x = 0;
        dest.y = 0;
        dest.w = 32;
        dest.h = TextureImage->h;

        SDL_BlitSurface(TextureImage, &src, TextureImageTemp, &dest);

        glBindTexture(GL_TEXTURE_2D, background_tiles[i]);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        // *** bombs here: ***
        if (TextureImageTemp->format->BytesPerPixel == 3)
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, TextureImageTemp->pixels);
        else
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 32, 32, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, TextureImageTemp->pixels);
    }

    glEnable(GL_TEXTURE_2D);

    SDL_FreeSurface(TextureImage);
    SDL_FreeSurface(TextureImageTemp);

    return true;
}

Uh… it bombs right there because TextureImageTemp is NULL. You declare it but never allocate a surface for it, so SDL_BlitSurface and the TextureImageTemp->format dereference both go through a null pointer.

So, like, make it non-NULL: allocate a 32x32 surface for it before the loop.
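
For example, here is a minimal sketch (untested) of allocating the temp surface right after the strip loads, matching the strip's pixel format so the blit is a straight copy:

// Allocate the 32x32 temp surface once, with the same pixel
// format as the loaded strip.
TextureImageTemp = SDL_CreateRGBSurface(SDL_SWSURFACE, 32, 32,
        TextureImage->format->BitsPerPixel,
        TextureImage->format->Rmask,
        TextureImage->format->Gmask,
        TextureImage->format->Bmask,
        TextureImage->format->Amask);
if (TextureImageTemp == NULL) {
    SDL_FreeSurface(TextureImage);  // don't leak the strip on failure
    return false;
}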

Hi,

You don’t really need to blit your image data to a temp buffer. You can just set up the OpenGL pixel transfer pipeline to read the correct sub-image.

– Niels

Thank you. How do I change the pipeline so that I don't need the temp variable?

Jeremy

Hi,

Sorry for the late reply… Take a look at glPixelStorei, especially GL_UNPACK_ROW_LENGTH, GL_UNPACK_IMAGE_HEIGHT, GL_UNPACK_SKIP_PIXELS, and GL_UNPACK_SKIP_ROWS. Your code should look something like this:

glPixelStorei (GL_UNPACK_SWAP_BYTES, GL_FALSE);
glPixelStorei (GL_UNPACK_LSB_FIRST, GL_FALSE);
glPixelStorei (GL_UNPACK_ALIGNMENT, 1);
glPixelStorei (GL_UNPACK_ROW_LENGTH, TextureImage->w);
glPixelStorei (GL_UNPACK_IMAGE_HEIGHT, TextureImage->h);
glPixelStorei (GL_UNPACK_SKIP_PIXELS, i*32);
glPixelStorei (GL_UNPACK_SKIP_ROWS, 0);

glBindTexture (GL_TEXTURE_2D, background_tiles[i]);

glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0, GL_RGB, GL_UNSIGNED_BYTE, TextureImage->pixels);

You might want to save and restore the pixel unpack state before and after…
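
One way to do that save/restore (a sketch; glPushClientAttrib and glPopClientAttrib have been in OpenGL since 1.1):

// Save every client-side pixel-store (pack/unpack) setting...
glPushClientAttrib(GL_CLIENT_PIXEL_STORE_BIT);

// ...set the unpack state and upload the tiles as above...

// ...then put it all back the way it was.
glPopClientAttrib();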

– Niels

Okay, now I'm getting somewhere. However, it seems that my array is being filled with the first texture only. Any ideas why this is? My loading loop is as follows:

for (int i = 0; i < MAX_TILES; ++i) {

    glPixelStorei(GL_UNPACK_SWAP_BYTES, GL_FALSE);
    glPixelStorei(GL_UNPACK_LSB_FIRST, GL_FALSE);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, TextureImage->w);
    glPixelStorei(GL_UNPACK_IMAGE_HEIGHT, TextureImage->h);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, i * 32);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);

    glBindTexture(GL_TEXTURE_2D, background_tiles[i]);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 32, 32, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, TextureImage->pixels);
}

I see that you skip i * 32 pixels. If I'm not mistaken, the bitmaps are 32x32, and you should therefore skip i*32*32. You could also increment the bitmap pointer directly:

glTexImage2D(… TextureImage->pixels + i*32*32*3);

Originally posted by coelurus:
I see that you skip i * 32 pixels. If I'm not mistaken, the bitmaps are 32x32, and you should therefore skip i*32*32. You could also increment the bitmap pointer directly:

glTexImage2D(… TextureImage->pixels + i*32*32*3);

I skip i*32 pixels in the x direction (GL_UNPACK_SKIP_PIXELS), and 0 pixels in the y direction (GL_UNPACK_SKIP_ROWS).
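
To make the addressing concrete: with GL_UNPACK_ALIGNMENT at 1 and a tightly packed 3-byte RGB source, the unpack state resolves to roughly this start address (a sketch of the arithmetic, not a literal GL implementation):

const unsigned char *base = (const unsigned char *)TextureImage->pixels;
int bpp       = 3;               /* bytes per pixel for GL_RGB */
int row_len   = TextureImage->w; /* GL_UNPACK_ROW_LENGTH       */
int skip_px   = i * 32;          /* GL_UNPACK_SKIP_PIXELS      */
int skip_rows = 0;               /* GL_UNPACK_SKIP_ROWS        */

/* First byte GL reads; each of the 32 rows then advances by
   row_len * bpp bytes, so this really is the i-th 32x32 tile. */
const unsigned char *start =
    base + (skip_rows * row_len + skip_px) * bpp;

One SDL detail worth double-checking: if TextureImage->pitch is not equal to TextureImage->w * 3, the scanlines are padded, and GL_UNPACK_ROW_LENGTH should then be the pitch divided by BytesPerPixel rather than the width.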

– Niels

Neither of these works, and I don't understand why. I did find, however, that if I change the unpack skip value to i * 32 * 2, all of the array elements fill up with the second texture, but that doesn't really help me. I don't see what the problem is at all. Maybe i is not being incremented properly or something.

Jeremy

Hmm, I don't really know how the skips work…
How about loading the entire texture strip at once, then breaking it down when rendering (since the texture dimensions will still be 2^n)? This also keeps the tiles in one texture, and you skip a lot of binds.
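
Something like this hypothetical helper (strip_texture holds the whole strip and STRIP_W is its total width in pixels; both names are made up here):

/* Draw tile i of the strip at screen position (x, y),
   selecting the tile with texture coordinates alone. */
void DrawTile(int i, float x, float y)
{
    float u0 = (i * 32.0f) / STRIP_W;
    float u1 = ((i + 1) * 32.0f) / STRIP_W;

    glBindTexture(GL_TEXTURE_2D, strip_texture);
    glBegin(GL_QUADS);
        glTexCoord2f(u0, 0.0f); glVertex2f(x, y);
        glTexCoord2f(u1, 0.0f); glVertex2f(x + 32.0f, y);
        glTexCoord2f(u1, 1.0f); glVertex2f(x + 32.0f, y + 32.0f);
        glTexCoord2f(u0, 1.0f); glVertex2f(x, y + 32.0f);
    glEnd();
}

One caveat: with GL_LINEAR filtering, neighbouring tiles can bleed into each other at the seams; nudging u0 and u1 inward by half a texel is the usual fix.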
