I'm trying to create a simple OpenGL 3.0 application. I have an ATI HD 3870 card and everything works fine on it, but I get a strange error on nVidia cards with the latest 182.50 drivers. After some tests I located a place where glGetError() returns error 1280 (GL_INVALID_ENUM); it happens right after glEnable(GL_TEXTURE_2D), at the very beginning of the program. The question is: what does it mean?

Ah, and the actual problem is that on the nVidia card my object in the scene (just a cube) doesn't have any texture. On ATI everything works perfectly, without any errors.
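
This is roughly how I located it (check_gl_error is just an illustrative helper, not the real code, which uses inline checks; it assumes <stdio.h> and the GL headers are already included):

	static void check_gl_error(const char *where)
	{
		GLenum err = glGetError();
		if (err != GL_NO_ERROR)
			printf("GL error 0x%04X after %s\n", (unsigned)err, where);
	}

	/* ... */
	glEnable(GL_TEXTURE_2D);
	check_gl_error("glEnable(GL_TEXTURE_2D)"); /* reports 0x0500 = 1280 = GL_INVALID_ENUM on nVidia */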

Some new facts from the front line :slight_smile:

I commented out glEnable(GL_TEXTURE_2D), since someone told me it is not required. And I found a second place where error 1280 is raised on nVidia; it is this line:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, buffer);

And just as a test I removed the OpenGL 3.0 forward-compatible context creation code and rewrote my shaders for GLSL 1.20, and guess what? There are no errors and the application runs fine on both ATI and nVidia. x_x

So I want to ask: what is wrong with this glTexImage2D call with respect to OpenGL 3.0?
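
For reference, a forward-compatible 3.0 context is typically created with WGL_ARB_create_context, roughly like this (only a sketch; hdc, the pixel format setup, error handling, and loading wglCreateContextAttribsARB via wglGetProcAddress are omitted):

	int attribs[] = {
		WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
		WGL_CONTEXT_MINOR_VERSION_ARB, 0,
		WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
		0
	};

	/* create an old-style context first, so the extension entry point can be loaded */
	HGLRC temp_context = wglCreateContext(hdc);
	wglMakeCurrent(hdc, temp_context);

	HGLRC gl3_context = wglCreateContextAttribsARB(hdc, 0, attribs);
	wglMakeCurrent(hdc, gl3_context);
	wglDeleteContext(temp_context);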

Maybe it is in the way you are using it.
Could you post the code related to your texture setup?

Hm, just changing GL_BGR to GL_RGB in glTexImage2D made the application run normally with the OpenGL 3.0 context on nVidia too. But GL_BGR works on ATI. Strange issue :frowning:

Yes, here is the old code (I removed the glGetError() checks from it):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>
/* GL headers / extension loader and the LOG_ERROR macro come from the rest of the program */

GLuint create_texture_from_tga(const char *file_name)
{
	struct tga_header header;
	int32_t           size, bpp, pixels, i, j;
	uint8_t           *buffer, rgba[4], temp, pack;
	FILE              *image;
	GLuint            texture;

	image = fopen(file_name, "rb");
	if (NULL == image)
	{
		LOG_ERROR("Can't open file %s\n", file_name);
		return 0;
	}

	if (fread(&header, 1, sizeof(header), image) != sizeof(header))
	{
		LOG_ERROR("Can't read TGA header from file %s\n", file_name);
		fclose(image);
		return 0;
	}

	/* only uncompressed (2) and RLE (10) true-color TGAs with 24 or 32 bpp are supported */
	if ((header.bitperpel != 24 && header.bitperpel != 32) ||
	    (header.datatype != 2 && header.datatype != 10))
	{
		LOG_ERROR("Wrong TGA format %s\n", file_name);
		fclose(image);
		return 0;
	}

	if (header.idlength)
		fseek(image, header.idlength, SEEK_CUR);

	size   = (int32_t)header.width * (int32_t)header.height;
	bpp    = header.bitperpel / 8;
	buffer = (uint8_t*)malloc(size * bpp);

	if (header.datatype == 2)
	{
		/* uncompressed image - read all pixels at once */
		fread(buffer, bpp, size, image);
	}
	else if (header.datatype == 10)
	{
		/* RLE-compressed image - unpack packet by packet */
		for (i = 0, j = 0; i < size;)
		{
			fread(&temp, 1, 1, image);
			pack   = temp & 128;
			pixels = (int32_t)(temp & 127);

			if (pack)
			{
				/* run-length packet: one pixel value repeated (pixels + 1) times */
				fread(rgba, 1, bpp, image);
				i += pixels + 1;
				while (pixels-- >= 0)
				{
					memcpy(buffer + j, rgba, bpp);
					j += bpp;
				}
			}
			else
			{
				/* raw packet: (pixels + 1) literal pixels follow */
				fread(buffer + j, bpp, pixels + 1, image);
				j += (pixels + 1) * bpp;
				i += pixels + 1;
			}
		}
	}

	fclose(image);

	glGenTextures(1, &texture);
	glBindTexture(GL_TEXTURE_2D, texture);

	glTexImage2D(GL_TEXTURE_2D, 0, bpp == 3 ? GL_RGB8 : GL_RGBA8, header.width,
		header.height, 0, bpp == 3 ? GL_BGR : GL_BGRA, GL_UNSIGNED_BYTE, buffer);
	// as I said earlier, changing GL_BGR to GL_RGB here resolves the problem

	free(buffer);

	return texture;
}
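
(The loader relies on a tga_header struct defined elsewhere; for reference, a packed layout matching the standard 18-byte TGA header would look roughly like this, with the names of the unused fields being only placeholders:)

	#pragma pack(push, 1)
	struct tga_header {
		uint8_t  idlength;
		uint8_t  colormap;
		uint8_t  datatype;       /* 2 = uncompressed RGB, 10 = RLE RGB */
		uint8_t  colormapspec[5];
		uint16_t xorigin;
		uint16_t yorigin;
		uint16_t width;
		uint16_t height;
		uint8_t  bitperpel;      /* 24 or 32 */
		uint8_t  description;
	};
	#pragma pack(pop)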

Any ideas?

I looked at your code and I see nothing wrong.
But there is something I don't understand: what is the pixel format of the loaded data, BGR or RGB? Because when switching from one to the other you should see weird colors, shouldn't you?

The early nVidia GL 3.0 drivers are a bit buggy; it's probably not your fault.


Yes, I do. I added swapping from BGR to RGB to the image-loading code, and this works for me.
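
Something like this, done right after the pixel data is read (only a sketch, reusing the buffer, size, bpp, i and temp variables from the loader above):

	/* convert BGR(A) to RGB(A) in place before glTexImage2D */
	for (i = 0; i < size * bpp; i += bpp)
	{
		temp          = buffer[i];
		buffer[i]     = buffer[i + 2];
		buffer[i + 2] = temp;
	}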

I think so.

Anyway, thanks for the replies! :slight_smile:


Thanks, I understand it better now; it must be a driver bug, as Jan said.