Help on using 16-bit textures

I have been beating my head against the wall trying to load a 16-bit texture into OpenGL, with no success, and I am beginning to think either I’m incredibly stupid or my drivers are not cooperating.

I have provided both the code that works and the code that doesn’t.

Here’s the code:

Call glBindTexture(glTexture2D, texName(index))
Call glPixelStorei(GL_UNPACK_ALIGNMENT, 1)
Call glTexParameteri(glTexture2D, tpnTextureWrapS, GL_REPEAT)
Call glTexParameteri(glTexture2D, tpnTextureWrapT, GL_REPEAT)
Call glTexParameteri(glTexture2D, tpnTextureMagFilter, GL_NEAREST)
Call glTexParameteri(glTexture2D, tpnTextureMinFilter, GL_NEAREST)
Call glTexEnvi(tetTextureEnv, tenTextureEnvMode, tepModulate)

' THIS WORKS GREAT! (not what I want)
Call glTexImage2D(glTexture2D, 0&, GL_RGBA, _
                  ogfWidth, ogfHeight, 0&, GL_RGBA, _
                  GL_UNSIGNED_BYTE, data(0))

' THIS DOESN'T WORK AT ALL
' This is not what I want ideally, but this
' would be acceptable
Call glTexImage2D(glTexture2D, 0&, GL_RGBA4, _
                  ogfWidth, ogfHeight, 0&, GL_RGBA4, _
                  GL_UNSIGNED_BYTE, data16(0))

' THIS DOESN'T WORK AT ALL
' This is what I really want, but I assume
' I don't have the extensions.
Call glTexImage2D(glTexture2D, 0&, GL_RGBA4, _
                  ogfWidth, ogfHeight, 0&, GL_RGB5_A1, _
                  GL_UNSIGNED_SHORT_5_5_5_1, data5551(0))

The code is in Visual Basic. It works perfectly fine if I use GL_RGBA for both the internal format and the format with 32-bit textures; however, I then have to convert the textures from 16-bit to 32-bit, and since 16 bits is all my source gives me when reading it in, it makes little sense to waste all that space.

I have tried changing the pixel alignment to packed alignment for the GL_RGB5_A1 code, but it kept spitting out ‘Invalid Enum’, so I guess the Microsoft opengl32.dll doesn’t support it. However, even the 16-bit formats like GL_RGBA4 don’t work.

So, can someone tell me if my code is correct? If it is, then I will suspect my drivers. If not, could you please give a sample of the proper way to generate a 16-bit texture from a 16-bit source texture without conversion? Or just let me know it can’t be done. :P

I really appreciate any help, as I have searched far and wide looking for this. Yes, I have the Red Book and many, many tutorials, manuals, the OpenGL FAQ, etc. Maybe I’m just blind because I didn’t see this mentioned anywhere.

I almost forgot: in the 16-bit version of the call, I have tried all combinations of pixel types, i.e. GL_UNSIGNED_BYTE, GL_UNSIGNED_SHORT, etc., with no luck. I have tried many, many combinations of values to no avail.

So you bumped into the same wall
About a month or two ago I asked the same question on this board. I got many answers, but I could not get it to work. I finally surrendered, unpacked the 16-bit color to 24-bit (or 32-bit), and let glTexImage2D read that. One of the answers I got told me that under OpenGL 1.2 there are new definitions that let you specify more pixel formats, something like GL_UNSIGNED_SHORT_5_6_5, but I am on Win98 and can only stick with the OpenGL 1.1 it gives me. I am still looking for a better method than this unpack-and-upload. Anyone have a clue?
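
For what it’s worth, the unpack step described above would look roughly like this in C; it is only a sketch, the function and type names are invented for the example, and it assumes the 16-bit source is packed as 5:6:5:

#include <stddef.h>

typedef unsigned short u16;
typedef unsigned char  u8;

/* Expand packed 5:6:5 pixels to 8:8:8 so they can be uploaded with
   GL_RGB / GL_UNSIGNED_BYTE.  Replicating the high bits into the low
   bits makes 0x1F map to 0xFF instead of 0xF8. */
void unpack_565_to_rgb888(const u16 *src, u8 *dst, size_t pixel_count)
{
    size_t i;
    for (i = 0; i < pixel_count; ++i) {
        u16 p  = src[i];
        u8  r5 = (u8)((p >> 11) & 0x1F);
        u8  g6 = (u8)((p >> 5)  & 0x3F);
        u8  b5 = (u8)( p        & 0x1F);
        dst[3 * i + 0] = (u8)((r5 << 3) | (r5 >> 2));
        dst[3 * i + 1] = (u8)((g6 << 2) | (g6 >> 4));
        dst[3 * i + 2] = (u8)((b5 << 3) | (b5 >> 2));
    }
}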

[This message has been edited by Nil_z (edited 03-24-2001).]

Yes…same wall. I’m glad to see I’m not the only one losing my mind.

I almost forgot. Bump. My system is:

Windows 98
ATI Rage Pro (software-only OpenGL, as their ICDs seem to be good only for BSODs)
PII 350 MHz, 128 MB RAM, 800x600x16-bit display
Microsoft OpenGL Libraries

I am writing a model editor and will need this to work on more than just my own system. If it turns out that the drivers are the problem, then I suppose I will simply bite the bullet and convert, but that takes time and seems wasteful.

No, you want a format like GL_RGB/GL_UNSIGNED_SHORT_5_6_5. GL_RGB5 (for instance) is not a format, it is an internal format type.

  • Matt
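
In C, the call Matt is describing would look something like the sketch below. This is only an illustration: it assumes the driver actually exposes the OpenGL 1.2 packed pixel types, and the function and parameter names are placeholders rather than anything from this thread.

#include <GL/gl.h>

/* Upload packed 5:6:5 pixels directly, asking for a 16-bit internal format.
   Requires GL_UNSIGNED_SHORT_5_6_5 (OpenGL 1.2) to be available. */
void upload_rgb565_texture(GLsizei width, GLsizei height, const GLushort *pixels16)
{
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexImage2D(GL_TEXTURE_2D,
                 0,                        /* mip level                        */
                 GL_RGB5,                  /* requested 16-bit internal format */
                 width, height,
                 0,                        /* border                           */
                 GL_RGB,                   /* format of the client-side data   */
                 GL_UNSIGNED_SHORT_5_6_5,  /* type: one packed 16-bit pixel    */
                 pixels16);
}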

Yes, what I mean is to specify the texture format/type as GL_RGB/GL_UNSIGNED_SHORT_5_6_5 (for example). Just leave the internal format as GL_RGB or GL_RGBA if you do not care about the internal format. Well, I have not tried this myself (I am using OpenGL 1.1 with Win98), so correct me if I am wrong.
Matt, is there a better way to load a 16-bit texture into OpenGL other than unpacking it to 24-bit (or 32-bit) and then calling glTexImage2D (under OpenGL 1.1)?

[This message has been edited by Nil_z (edited 03-24-2001).]

Call glBindTexture(glTexture2D, texName(index))
Call glPixelStorei(pxsUnpackAlignment, 1)
Call glTexParameteri(glTexture2D, tpnTextureWrapS, GL_REPEAT)
Call glTexParameteri(glTexture2D, tpnTextureWrapT, GL_REPEAT)
Call glTexParameteri(glTexture2D, tpnTextureMagFilter, GL_NEAREST)
Call glTexParameteri(glTexture2D, tpnTextureMinFilter, GL_NEAREST)
Call glTexEnvi(tetTextureEnv, tenTextureEnvMode, GL_REPLACE)

' Create the texture
Call glTexImage2D(glTexture2D, 0&, GL_RGBA, OGFWidth, _
OGFHeight, 0&, GL_UNSIGNED_SHORT_5_6_5, _
GL_UNSIGNED_BYTE, data16(0))

I tried the above using GL_UNSIGNED_SHORT_5_6_5 and the other types (GL_UNSIGNED_SHORT_5_5_5_1, etc.). I also tried various pixel types such as GL_UNSIGNED_BYTE and GL_UNSIGNED_SHORT, with no luck. I’m sorry I included that poor example in my first post; I just wanted to get across that I am desperate and trying anything at this point.

The above gave me a glInvalidEnum message, and that’s about all I can get it to spit out at me until I give it GL_RGBA for both. I have also tried changing the pixel store setting to the other possible values, thinking maybe that was causing it, but it wasn’t.

Please help me keep what remaining hair I have left.

glTexImage2D(glTexture2D, 0&, GL_RGBA, OGFWidth, _
    OGFHeight, 0&, GL_UNSIGNED_SHORT_5_6_5,  // I think it should be GL_RGB here
    GL_UNSIGNED_BYTE,                        // and GL_UNSIGNED_SHORT_5_6_5 here
    data16(0))

You used GL_UNSIGNED_SHORT_5_6_5 as the format; that is why you get an invalid enum error. That constant is for the type parameter. But I know no more than that. Matt, can you explain it?

[This message has been edited by Nil_z (edited 03-24-2001).]

GL_UNSIGNED_SHORT_5_6_5 is from OpenGL 1.2.

You should use GL_EXT_packed_pixels instead.

glTexImage2D( GL_TEXTURE_2D, nLOD, GL_RGBA, nWidth, nHeight, 0, GL_RGBA, GL_UNSIGNED_SHORT_5_5_5_1_EXT, pData );

BTW, this extension is kind of really f***ed up and big-endian only (because it was created for SGI hardware), so you should probably convert "normal" 1:5:5:5 to 5:5:5:1.
(GeForce drivers support OpenGL 1.2, so 1:5:5:5 can be used directly with GL_UNSIGNED_SHORT_1_5_5_5_REV.)
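
A minimal C sketch of that 1:5:5:5 to 5:5:5:1 shuffle, assuming the source pixels carry alpha in the top bit (as in a typical Windows 16-bit surface); the helper name is made up:

#include <stddef.h>

typedef unsigned short u16;

/* Move R, G and B up one bit and drop the alpha bit down to bit 0, turning an
   A1-R5-G5-B5 pixel into the R5-G5-B5-A1 layout that
   GL_UNSIGNED_SHORT_5_5_5_1_EXT expects. */
void convert_1555_to_5551(const u16 *src, u16 *dst, size_t pixel_count)
{
    size_t i;
    for (i = 0; i < pixel_count; ++i) {
        u16 p = src[i];
        dst[i] = (u16)(((p & 0x7FFF) << 1) | ((p >> 15) & 0x1));
    }
}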

Serge- I typed it in just as you have it, and I am still getting invalidEnum as my error.

I read the document you referred to. It seems to leave things a bit foggy on which pixel storage method I should use, so I tried packed pixel alignment as well as unpacked pixel alignment.

Since the return code keeps coming back invalidEnum, I would suppose this means that the OpenGL driver doesn’t understand what I sent it. Would that be a good assumption? And if so, is it because it’s Microsoft’s OpenGL driver? Is there a version that would understand this?

This discussion seems pointless now.
Before you can use an extension, you must check whether you actually have it.

As I remember, the MS software-only OpenGL implementation does not support the GL_EXT_packed_pixels extension.
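
For completeness, a bare-bones extension check in C might look like this; a robust version should match whole space-separated tokens rather than substrings, but this is enough to see whether GL_EXT_packed_pixels shows up at all:

#include <GL/gl.h>
#include <string.h>

/* Must be called with a current OpenGL rendering context. */
int has_packed_pixels(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, "GL_EXT_packed_pixels") != NULL;
}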

I’m not sure, but I think that

Call glTexImage2D(glTexture2D, 0&, GL_RGBA4, ogfWidth, ogfHeight, 0&, GL_RGBA, GL_UNSIGNED_BYTE, data16(0))

is what you want…

geez.

I am simply trying to find out whether 16-bit will or will not work as a format and, if so, how it can be done. If it is only available in 1.2, then so be it; I’ll convert all of the 16-bit images to 32-bit.

If it doesn’t work on ALL systems, then I will not bother checking, since I was only trying to make my texture loads faster by not having to convert them.

Fiend- Thanks, however that would give me a 16-bit internal format, whereas I am trying to use 16-bit for the format, in other words, for the texture data itself.

[This message has been edited by Sheepie (edited 03-24-2001).]

On my page (linked in my profile) there’s an example of the packed pixels extension. Also of interest might be the internal formats demo.

I have a question about internal formats. I remember that in Quake 3 you can choose between 16-bit and 32-bit texture quality. I think that setting controls the internal format, since they use JPG and TGA for textures, which are mostly 24-bit or 32-bit. The question is: is it possible to select a texture’s internal format if you do not have OpenGL 1.1 support? I can not think of a piece of hardware that does NOT support OpenGL 1.1; I am just wondering whether it is possible.
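
For what it’s worth, the Quake 3-style quality switch described above is just a choice of internal format at upload time. A hedged C sketch, with invented names, using the OpenGL 1.1 sized internal formats:

#include <GL/gl.h>

/* The client data stays 32-bit RGBA; only the requested internal format
   changes between 16-bit and 32-bit. */
void upload_rgba_texture(int use_16bit_quality, GLsizei width, GLsizei height,
                         const GLubyte *rgba8888)
{
    GLint internal = use_16bit_quality ? GL_RGBA4 : GL_RGBA8;
    glTexImage2D(GL_TEXTURE_2D, 0, internal, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba8888);
}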

Originally posted by Nil_z:
I can not think of a piece of hardware that does NOT support OpenGL 1.1; I am just wondering whether it is possible.

It was possible five years ago…