Padded Textures

Hey everyone! I am currently in the process of switching our OGL code to pad surfaces passed to it from SDL up to power-of-two dimensions and then using glTexSubImage2D() to send the pixel data.

The issue I am having is getting the coordinates set up properly when I draw these textures. For example, an image that is 800x600 obviously gets padded to 1024x1024.

This is what I get (all of the drawing functions are similar to the example code below).


if(image)
{
	glColor4ub(255, 255, 255, 255);

	glEnable(GL_TEXTURE_2D);
	glBindTexture(GL_TEXTURE_2D, getTextureId(image));

	// Note: these texture coordinates span the entire (padded) texture,
	// while the quad is sized to the original image.
	glBegin(GL_QUADS);
		glTexCoord2f(0.0, 0.0); glVertex2i(x, y);
		glTexCoord2f(0.0, 1.0); glVertex2i(x, y + image->getHeight());
		glTexCoord2f(1.0, 1.0); glVertex2i(x + image->getWidth(), y + image->getHeight());
		glTexCoord2f(1.0, 0.0); glVertex2i(x + image->getWidth(), y);
	glEnd();

	glDisable(GL_TEXTURE_2D);

	return true;
}

The image probably doesn't help, but if someone could point me in the right direction it would be appreciated. I believe I need to size my quads and texture coordinates to reflect the original image size and 'exclude' the padded section. Is this correct?
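
Something like this sketch is what I have in mind (paddedWidth/paddedHeight are placeholder names for the power-of-two dimensions the texture was padded to, not actual members of our class):

// Map the texture coordinates to the sub-rectangle that actually holds
// pixel data, so the blank padding is never sampled.
float u = (float)image->getWidth()  / (float)paddedWidth;
float v = (float)image->getHeight() / (float)paddedHeight;

glBegin(GL_QUADS);
	glTexCoord2f(0.0f, 0.0f); glVertex2i(x, y);
	glTexCoord2f(0.0f, v);    glVertex2i(x, y + image->getHeight());
	glTexCoord2f(u,    v);    glVertex2i(x + image->getWidth(), y + image->getHeight());
	glTexCoord2f(u,    0.0f); glVertex2i(x + image->getWidth(), y);
glEnd();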


I am currently in the process of switching our OGL code to pad surfaces passed to it from SDL to power of 2 and then using glTexSubImage2D() to send the pixel data.

Dodging the question of what you're doing wrong (your problem is likely in the code that does the padding, not the code that does the rendering; you can test this by loading an image from a file and using it directly, without padding): is there some particular reason you feel the need to do the padding at all?

Any hardware capable of OpenGL 2.1 or above, which is any hardware made in the last five-plus years, is also capable of using non-power-of-two (NPOT) textures. NPOT textures aren't something you should strive to use whenever possible, but you also shouldn't pad images just because you were given an NPOT image.
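
If older hardware is the concern, you can also check for NPOT support at runtime and only fall back to padding when you have to. A minimal sketch, assuming a current GL context and the classic extension-string query:

#include <string.h>

// NPOT textures are core in GL 2.0 and above; on older drivers support
// is advertised through the extension string instead.
const char* extensions = (const char*)glGetString(GL_EXTENSIONS);
bool npotSupported = extensions != NULL
	&& strstr(extensions, "GL_ARB_texture_non_power_of_two") != NULL;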

Well, we have had issues with ATi and Intel machines not displaying correctly at all. That was back when we didn't pay attention to NPOT, so we are trying to find a solution that supports the broadest set of hardware. (It is just a 2D game.)

Sorry, I didn't mean to ask you to tell me what is wrong; I'm just looking to see whether I'm headed in the right direction at all. I know it works without padding on my machine because it can handle NPOT (MacBook Pro).

Here is the code that generates our textures:


GLuint OGL_Renderer::generateTexture(SDL_Surface *src)
{
	GLuint texId = 0;

	glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
	glGenTextures(1, &texId);
	glBindTexture(GL_TEXTURE_2D, texId);

	SDL_PixelFormat *format = src->format;

	// Detect which order the pixel data is in to properly feed OGL.
	GLint nColors = format->BytesPerPixel;

	/**
	 * 	\todo	When compiling in nightmare mode with Visual Studio, it indicated that \c textureFormat could be potentially
	 *			uninitialized should nColors be anything other than '3' or '4' (24 bit and 32 bit color modes, respectively).
	 *			The only other expected values would be '2' (16 bit color mode) or '1' (8 bit color mode). While in practice
	 *			neither of these cases is likely to show up, it's extremely important that textureFormat is initialized to a
	 *			good default value that will work for generally all modes or one that will at least not cause a crash.
	 */
	GLenum textureFormat = GL_RGBA;	// Safe default so an unexpected nColors never leaves this uninitialized.
	if(nColors == 4)
	{
		if(format->Rmask == 0x000000ff)
			textureFormat = GL_RGBA;
		else
			textureFormat = GL_BGRA;
	}
	else if(nColors == 3)     // no alpha channel
	{
		if(format->Rmask == 0x000000ff)
			textureFormat = GL_RGB;
		else
			textureFormat = GL_BGR;
	}
	else
		Logger::log << Logger::warning() << "Image is not truecolor." << std::endl;

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, TEXTURE_FILTER);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, TEXTURE_FILTER);

	// Allocate the padded power-of-two texture with no initial data, then
	// upload the actual pixels into its top-left corner.
	glTexImage2D(GL_TEXTURE_2D, 0, nColors, nextPower2(src->w), nextPower2(src->h), 0, textureFormat, GL_UNSIGNED_BYTE, NULL);
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, src->w, src->h, textureFormat, GL_UNSIGNED_BYTE, src->pixels);

	return texId;
}
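
(For reference, nextPower2() just rounds up to the next power of two; something along these lines:)

int nextPower2(int n)
{
	// Smallest power of two that is >= n (returns 1 for n <= 1).
	int p = 1;
	while(p < n)
		p <<= 1;
	return p;
}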

It looks to me as if all you need is a call to glPixelStorei(GL_UNPACK_ROW_LENGTH, ...) so that GL knows the actual pitch of your source rows. Your unpack alignment should probably be changed to 1 as well.
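
A minimal sketch of what I mean, assuming src is the SDL_Surface being uploaded (SDL surfaces can carry per-row padding in their pitch, which is exactly what GL_UNPACK_ROW_LENGTH, measured in pixels, accounts for):

// Tell GL the true length of a source row, in pixels, so it skips any
// per-row padding SDL added to the surface.
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glPixelStorei(GL_UNPACK_ROW_LENGTH, src->pitch / src->format->BytesPerPixel);

glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, src->w, src->h, textureFormat, GL_UNSIGNED_BYTE, src->pixels);

// Restore the default so later uploads aren't affected.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);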

Hmm, this is interesting; the POT requirement practically calls for texture atlases.

Well, what I have done now is the following:


	// Zero-fill a buffer the size of the padded texture so the padded
	// region has defined (blank) contents.
	unsigned char* pixels = new unsigned char[(nextPower2(src->w) * nextPower2(src->h)) * nColors];
	for (int i = 0; i < (nextPower2(src->w) * nextPower2(src->h) * nColors); i++)
		pixels[i] = 0;

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, TEXTURE_FILTER);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, TEXTURE_FILTER);

	glTexImage2D(GL_TEXTURE_2D, 0, nColors, nextPower2(src->w), nextPower2(src->h), 0, textureFormat, GL_UNSIGNED_BYTE, pixels);
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, src->w, src->h, textureFormat, GL_UNSIGNED_BYTE, src->pixels);

	delete[] pixels;	// Don't leak the temporary buffer.

I am filling a padded texture with blank data and then updating only the section that contains the original image, and I am extending the vertex coordinates to the full padded size.

This seems to be working just fine; however, it is really slow. My guess is the loop that zeroes the pixel array.
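
If the zeroing loop is the bottleneck, note that it also re-evaluates nextPower2() in the loop condition on every iteration. A sketch of what I might try instead (uses std::vector, so <vector> is needed):

	// Hoist the size computation out of the loop and let the vector
	// zero-initialize the buffer in one pass.
	int potW = nextPower2(src->w);
	int potH = nextPower2(src->h);
	std::vector<unsigned char> pixels(potW * potH * nColors, 0);

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, TEXTURE_FILTER);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, TEXTURE_FILTER);

	glTexImage2D(GL_TEXTURE_2D, 0, nColors, potW, potH, 0, textureFormat, GL_UNSIGNED_BYTE, &pixels[0]);
	glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, src->w, src->h, textureFormat, GL_UNSIGNED_BYTE, src->pixels);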