glTexImage2D with 16F textures?

According to the texture_float spec:

Accepted by the <internalFormat> parameter of TexImage1D,
TexImage2D, and TexImage3D:
    RGBA32F_ARB                      0x8814
    RGB32F_ARB                       0x8815
    ALPHA32F_ARB                     0x8816
    INTENSITY32F_ARB                 0x8817
    LUMINANCE32F_ARB                 0x8818
    LUMINANCE_ALPHA32F_ARB           0x8819
    RGBA16F_ARB                      0x881A
    RGB16F_ARB                       0x881B
    ALPHA16F_ARB                     0x881C
    INTENSITY16F_ARB                 0x881D
    LUMINANCE16F_ARB                 0x881E
    LUMINANCE_ALPHA16F_ARB           0x881F

That’s great, but what can I use in the <format> parameter of glTexImage2D()? If I have all my pixel data and want to send it to the 16F texture, how the heck do I do that if the <format> parameter doesn’t accept GL_INTENSITY16F or GL_LUMINANCE16F?

These are the values that <format> can accept: GL_COLOR_INDEX, GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, GL_RGB, GL_BGR, GL_RGBA, GL_BGRA, GL_LUMINANCE, and GL_LUMINANCE_ALPHA. How can I turn a grid of 16-bit short values into 16F data on the GPU?
Apparently you have to pass the pixel data as classic floats, 32 bits wide instead of 16; the driver converts it down to 16F on upload.

format = GL_RGB   // or GL_RGBA
type   = GL_FLOAT // 32-bit floats
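A minimal sketch of that suggestion in C: `heights_to_floats` and the buffer names are made up for illustration, and the actual glTexImage2D call appears only in a comment since it needs a live GL context.

```c
#include <stdlib.h>

/* Expand 16-bit heights into the 32-bit floats that glTexImage2D
   expects when <type> is GL_FLOAT, normalized to [0, 1] the same way
   GL normalizes GL_UNSIGNED_SHORT data. Caller frees the result. */
static float *heights_to_floats(const unsigned short *heights, int count)
{
    float *pixels = malloc(sizeof(float) * (size_t)count);
    for (int i = 0; i < count; ++i)
        pixels[i] = heights[i] / 65535.0f;
    return pixels;
}

/* With a live GL context, the upload would then look like:

   float *pixels = heights_to_floats(heightarray, resolution * resolution);
   glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE16F_ARB,
                resolution, resolution, 0, GL_LUMINANCE, GL_FLOAT, pixels);
*/
```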

That’s hilarious. I have been using GL_UNSIGNED_BYTE since the beginning of time, and it didn’t even occur to me to change it. I don’t even see it anymore. XD

I’m trying this right now. Looks like it is working:
glTexImage2D GL_TEXTURE_2D,0,GL_LUMINANCE16F_ARB,resolution,resolution,0,GL_LUMINANCE,GL_UNSIGNED_SHORT,heightarray

Is there a reason glTexSubImage2D does not work with float textures? What if I want to modify my terrain heightmap data dynamically?

It should work. Uploading float32 data to a 16F texture would look something like this:
glTexImage2D GL_TEXTURE_2D,0,GL_LUMINANCE16F_ARB,resolution,resolution,0,GL_LUMINANCE,GL_FLOAT,heightarray

glTexImage2D works fine, but according to the spec and my own testing, glTexSubImage2D does not. I don’t want to update the whole texture, only part of it.

Maybe this is a solution:

typedef struct {
    GLenum iformat;      /* internal format, e.g. GL_LUMINANCE16F_ARB */
    GLenum format;       /* client-side <format>, e.g. GL_LUMINANCE   */
    GLenum storage;      /* client-side <type>, e.g. GL_FLOAT         */
    bool   isCompressed;
    int    pixelSize;    /* bytes per pixel in client memory          */
} TexFormat;

or simply

I don’t follow.

I mean, why not use the RG16F format instead? 16bpp textures aren’t natively supported anyway, afaik.

GL_RG16F for <internalFormat>, GL_RG for <format>, and GL_HALF_FLOAT_ARB for <type> are the glTexImage2D parameters, I think.

  1. I am talking about glTexSubImage2D, not glTexImage2D.

  2. Is this format supported on ATI cards?

  1. On HDxxx cards it is supported; I don’t know about previous generations. This format is defined in the ARB_texture_rg extension spec.
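If the client data should actually be 16-bit halves (the GL_HALF_FLOAT_ARB type mentioned above, from the ARB_half_float_pixel extension), the CPU-side encoding can be sketched as follows. `float_to_half` is a made-up name, and the sketch assumes IEEE 754 single-precision input:

```c
#include <stdint.h>
#include <string.h>

/* Convert a 32-bit float to a 16-bit half (IEEE 754 binary16),
   suitable for GL_HALF_FLOAT_ARB uploads. Handles zero, subnormals,
   infinity and NaN; rounds the mantissa to nearest. */
static uint16_t float_to_half(float f)
{
    uint32_t x;
    memcpy(&x, &f, sizeof x);

    uint16_t sign = (uint16_t)((x >> 16) & 0x8000u);
    int32_t  exp  = (int32_t)((x >> 23) & 0xFF) - 127 + 15;
    uint32_t mant = x & 0x7FFFFFu;

    if (exp >= 31) {                         /* overflow, Inf or NaN */
        if (((x >> 23) & 0xFF) == 0xFF && mant)
            return (uint16_t)(sign | 0x7E00u);   /* NaN */
        return (uint16_t)(sign | 0x7C00u);       /* Inf */
    }
    if (exp <= 0) {                          /* subnormal or zero */
        if (exp < -10)
            return sign;                     /* too small: flush to zero */
        mant |= 0x800000u;                   /* restore implicit leading 1 */
        uint32_t shift = (uint32_t)(14 - exp);
        uint16_t h = (uint16_t)(mant >> shift);
        if (mant & (1u << (shift - 1)))      /* round to nearest */
            h++;
        return (uint16_t)(sign | h);
    }
    uint16_t h = (uint16_t)(sign | (uint16_t)(exp << 10) | (mant >> 13));
    if (mant & 0x1000u)                      /* round to nearest */
        h++;
    return h;
}
```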

Oh duh, glTexSubImage2D doesn’t even care what the existing texture’s internal format is. How embarrassing.
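Given that, a dynamic heightmap patch just needs the sub-rectangle packed tightly (or described with glPixelStorei) before the glTexSubImage2D call. A minimal sketch, with made-up names; the GL calls are shown only in comments since they need a live context:

```c
/* Copy a w-by-h window starting at (x, y) out of a row-major source
   image that is src_w floats wide, into a tightly packed buffer that
   can be handed straight to glTexSubImage2D. */
static void copy_subrect(float *dst, const float *src, int src_w,
                         int x, int y, int w, int h)
{
    for (int row = 0; row < h; ++row)
        for (int col = 0; col < w; ++col)
            dst[row * w + col] = src[(y + row) * src_w + (x + col)];
}

/* With a live GL context, the partial update would then be:

   glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                   GL_LUMINANCE, GL_FLOAT, dst);

   Alternatively, skip the copy and point GL at the big array directly
   using glPixelStorei(GL_UNPACK_ROW_LENGTH, src_w) together with
   GL_UNPACK_SKIP_PIXELS and GL_UNPACK_SKIP_ROWS for the offset. */
```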