using a texture with GL_R3_G3_B2

Hi everyone. Although I’ve been around these forums for a year or two, I think this is the first time I’m posting in the advanced forum.
Anyway, what I’d like to ask fellow programmers is whether there are any gains to be had from using the GL_R3_G3_B2 internal format for my textures.
I will be using rather large textures (2048x2048) to display maps, so what we’re interested in is spatial resolution, not color resolution. The application should run on commodity graphics cards such as GeForces and Radeons (possibly somewhat old ones), so we’re keen on saving memory.
Can these cards represent an image in this format, or do they translate it internally to our known and beloved GL_RGB?
From what I’ve read in the red book, the internal format is only a request; OpenGL does not guarantee that this exact representation will actually be used.
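Just to put numbers on the memory question (my own back-of-the-envelope arithmetic, not anything the driver promises to allocate), here is what one 2048x2048 base level costs under the formats discussed in this thread:

```cpp
#include <cassert>
#include <cstddef>

// Bits-per-texel arithmetic for one 2048x2048 base level, no mipmaps.
// Actual driver allocation may differ (padding, internal format substitution).
constexpr std::size_t texels = 2048u * 2048u;  // 4,194,304 texels
constexpr std::size_t rgb8   = texels * 3;     // 24-bit GL_RGB8:      12 MB
constexpr std::size_t rgb565 = texels * 2;     // 16-bit RGB565:        8 MB
constexpr std::size_t rgb332 = texels * 1;     //  8-bit GL_R3_G3_B2:   4 MB
constexpr std::size_t dxt1   = texels / 2;     // DXT1/S3TC, 4 bpp:     2 MB
```

So 332 halves the footprint versus 565, but S3TC halves it again, which is part of why the thread ends up at texture compression.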

I seriously doubt any modern hardware supports the RGB332 format. They’d probably just use RGB565.

Uploading as 332 looks like 332 on my Radeon9600, but I don’t know if that’s because of the internal format or if the driver has just chopped off bits before storing it as 555.

Well, after some more searching and some “higher help” I think I’ll be using texture compression instead. Thanks to everybody ;)
Just for the fun of it though, arekkusu, how did you manage to use a 332 texture?
I tried this:

// init callback
GLuint texID;
unsigned char Tex = 0x00;     // one texel, all bits zero
cout << sizeof(Tex) << endl;  // prints 1
glGenTextures(1, &texID);
glBindTexture(GL_TEXTURE_2D, texID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // no mipmaps
glTexImage2D(GL_TEXTURE_2D, 0, GL_R3_G3_B2, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, &Tex);

// display
glBindTexture(GL_TEXTURE_2D, texID);
glColor3ub(255, 255, 255);
glBegin(GL_QUADS);
	glTexCoord2i(0, 0);
	glVertex2i(-1, -1);
	glTexCoord2i(1, 0);
	glVertex2i(1, -1);
	glTexCoord2i(1, 1);
	glVertex2i(1, 1);
	glTexCoord2i(0, 1);
	glVertex2i(-1, 1);
glEnd();

but instead of a black texture (or white, if the bits are inverted) I get cyan, which is R 0, G 255, B 255. Could you post some code?

Originally posted by moucard:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R3_G3_B2, 1, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, &Tex);
GL_RGB with GL_UNSIGNED_BYTE tells GL to read a 24-bit source (three bytes per texel). To upload packed 332 data you need to use GL_UNSIGNED_BYTE_3_3_2 as the type.
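To spell that out (my own sketch, not from the thread): with the GL_UNSIGNED_BYTE_3_3_2 packed type from GL 1.2, red sits in the top three bits of each byte, green in the middle three, and blue in the bottom two:

```cpp
#include <cassert>
#include <cstdint>

// Pack 8-bit r, g, b into one GL_UNSIGNED_BYTE_3_3_2 texel by keeping
// the top bits of each channel: red -> bits 7-5, green -> 4-2, blue -> 1-0.
uint8_t pack332(uint8_t r, uint8_t g, uint8_t b) {
    return static_cast<uint8_t>((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

// The corrected upload from the post above would then be:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_R3_G3_B2, 1, 1, 0,
//              GL_RGB, GL_UNSIGNED_BYTE_3_3_2, &Tex);
```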

Grr, once again I was bitten by the outdated Microsoft gl.h. I’ll make the switch one of these weeks, I’ll make the switch…
Thanks for the help everybody.

I have some software that supports 332 textures, and I have noticed absolutely no performance benefit from using the format directly, across various generations of NVIDIA and ATI consumer hardware.

I now convert them to RGB565 myself. I suspect the drivers just do the same thing internally, but I prefer doing it myself because I like to believe I’ve become brutally efficient at it.
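A minimal sketch of such a 332-to-565 conversion (my own, not the poster’s code, assuming bit replication to expand each channel so that 0 stays 0 and full scale maps to full scale):

```cpp
#include <cassert>
#include <cstdint>

// Expand one RGB332 texel to RGB565 by bit replication: the top bits of
// each narrow channel are repeated into the new low bits, so 0 -> 0 and
// each channel's maximum maps to the wider channel's maximum.
uint16_t rgb332_to_rgb565(uint8_t c) {
    uint8_t r3 = (c >> 5) & 0x07;
    uint8_t g3 = (c >> 2) & 0x07;
    uint8_t b2 =  c       & 0x03;
    uint8_t r5 = static_cast<uint8_t>((r3 << 2) | (r3 >> 1));             // 3 -> 5 bits
    uint8_t g6 = static_cast<uint8_t>((g3 << 3) |  g3);                   // 3 -> 6 bits
    uint8_t b5 = static_cast<uint8_t>((b2 << 3) | (b2 << 1) | (b2 >> 1)); // 2 -> 5 bits
    return static_cast<uint16_t>((r5 << 11) | (g6 << 5) | b5);
}
```

A plain shift-left without replication would also work but would map full-scale 332 channels to slightly-less-than-white in 565.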