Compressed Textures and Views

Hey All!

I dug through specs and books today and I cannot figure this out.
What internal-format combinations would work for the following code example, if my intention is to have the raw storage allocated as a non-compressed texture and the texture view interpret it as BC5 / RGTC2?

GLuint texId;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_3D, texId);
glTexStorage3D(GL_TEXTURE_3D, 1, GL_RGBA32UI, 4, 4, 16);
glBindTexture(GL_TEXTURE_3D, 0);


GLuint viewId;
glGenTextures(1, &viewId);
glTextureView(viewId, GL_TEXTURE_3D, texId, GL_COMPRESSED_RG_RGTC2, 0, 1, 0, 1); /* fails here */

glDeleteTextures(1, &viewId);
glDeleteTextures(1, &texId);

This example fails with GL_INVALID_OPERATION, and the GL debug output message says:

Internal formats neither compatible nor identical.

To narrow my question by exclusion:

  • glCompressedTex* with a pixel unpack buffer is not an option.
  • TexStorage cannot take the compressed internal formats; this is GL 4.5, and they have been removed there.
  • The OpenGL spec says the following pair is compatible: GL_RGTC2_RG and GL_COMPRESSED_RG_RGTC2. However, GL_RGTC2_RG is not a GL define or a defined value in any header or anywhere else in the spec.

My understanding from the compatibility table is that the code as written won’t work.
Is this a contradiction? It seems I cannot create a compressed texture with immutable storage, yet glTextureView requires immutable storage and its compatibility table lists compressed formats.

Is there any way to make this work?

Context: I am writing to this texture via CUDA/GL interop, which cannot work with compressed formats. Host-side calls are also not an option.

Would really appreciate some help here.


I haven’t used Texture Views, but…

GL_RGBA32UI has 16 bytes/texel, whereas RGTC2 has (on average) 1 byte/texel.

You need to use an RGTC2 internal format as the internal format of the base texture. As the following table lists, that’s one of: GL_COMPRESSED_RG_RGTC2 or GL_COMPRESSED_SIGNED_RG_RGTC2.
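A minimal sketch of that approach (untested; assumes a GL 4.3+ context and, for simplicity, a 2D target):

```c
/* Allocate the base texture directly with an RGTC2 internal format,
 * then view it as the other format in the same row of table 8.22
 * (the 128-bit RGTC class). */
GLuint baseId;
glGenTextures(1, &baseId);
glBindTexture(GL_TEXTURE_2D, baseId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RG_RGTC2, 4, 4);
glBindTexture(GL_TEXTURE_2D, 0);

GLuint viewId;
glGenTextures(1, &viewId);
glTextureView(viewId, GL_TEXTURE_2D, baseId,
              GL_COMPRESSED_SIGNED_RG_RGTC2, 0, 1, 0, 1);
```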

The basic rule when changing the internal format between the base texture and the view texture seems to be that they must have the same bytes/texel.

However, for compressed textures, it goes beyond that: they also have to use the same general class of texture compression. For instance, DXT1 and RGTC1 both use (on average) 0.5 bytes/texel, but you can’t allocate a DXT1 base texture and put an RGTC1 view texture on top of it.

So with what you’re trying above, you definitely should expect a GL_INVALID_OPERATION from glTextureView() for the reason suggested by the driver, which is also noted in the man page:

GL_INVALID_OPERATION is generated if internalformat is not compatible with the internal format of origtexture.

Where is this coming from?

glTexStorage*() cannot be provided an unsized internal format. But sized internal formats (compressed or uncompressed) like the RGTC2 formats should be fine.
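For example (a sketch; the error values are my reading of the GL 4.5 spec, and each call is assumed to target a freshly bound texture object, since TexStorage makes storage immutable):

```c
/* Unsized internal formats are rejected by TexStorage: */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG, 4, 4);   /* GL_INVALID_ENUM */

/* Sized formats, uncompressed or compressed, are accepted: */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RG8, 4, 4);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RG_RGTC2, 4, 4);
```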

Where do you see that?

What it says is:

Table 8.22: Compatible internal formats for TextureView. Formats in the same
row may be cast to each other.

Have you sanity checked this without using views? Last I checked, RGTC formats aren’t allowed for the TEXTURE_3D target.

Because (GL4.6 sec 8.5): An INVALID_OPERATION error is generated by TexImage3D if internalformat is one of the EAC, ETC2, or RGTC compressed formats and either border is non-zero, or target is not TEXTURE_2D_ARRAY.

thanks for the response arekkusu!

That doesn’t read that way to me, though:

if internalformat is one of the EAC, ETC2, or RGTC compressed formats and either border is non-zero, or target is not TEXTURE_2D_ARRAY

if ((internalformat in (EAC, ETC2, RGTC)) and (border is non-zero or target is not TEXTURE_2D_ARRAY))

Am I parsing the sentence wrong?

Hey Dark_Photon,

Since OpenGL 4.0 or so, I believe, the sized compressed formats have been removed from glTexStorage*D.

Also, RGTC uses 64 bits per channel per block, so RGTC2 (two channels) is 128 bits per block. According to the extension it is the same as DX BC5.

It looks like you’re parsing the English the same way I am. Which means TexImage3D(…RGTC…) = INVALID_OPERATION.

Generally speaking, block-compressed formats are not allowed for 1D or 3D textures (only the 2D, 2D_ARRAY, CUBE, CUBE_ARRAY, or RECT targets). The exceptions are BPTC, which explicitly allows 3D, and Nvidia’s VTC extension.
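If in doubt about a particular driver, you can also query instead of memorizing the table; a sketch (GL 4.3+, untested):

```c
/* Ask the implementation whether a compressed format is usable
 * with a given target. */
GLint supported = GL_FALSE;
glGetInternalformativ(GL_TEXTURE_3D, GL_COMPRESSED_RG_RGTC2,
                      GL_INTERNALFORMAT_SUPPORTED, 1, &supported);
/* Should come back GL_FALSE here; the same query with
 * GL_COMPRESSED_RGBA_BPTC_UNORM should come back GL_TRUE on
 * implementations that support BPTC 3D textures. */
```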

That’s merely a mistake on the glTexStorage*D documentation page. The actual specification specifies their behavior in terms of sequences of the various glTexImage*D calls, which do allow those formats where applicable.

Yeah, I’d figured that out by now too. Thanks Alfonse.

I’m still trying to figure out a way to write compressed bits into a (3D-compatible) compressed format with a compute shader. The reason is that I need to write a great many blocks to indirect locations that come from an allocator implemented on the GPU. Downloading the data and issuing a million copy calls would be too much overhead.
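One approach that might fit (not discussed above, so take it as an assumption on my part): stage the raw block bits in an uncompressed 128-bit format written by the compute shader, then use glCopyImageSubData, which ARB_copy_image permits between compressed and uncompressed textures whose uncompressed texel size equals the compressed block size. One GL_RGBA32UI texel matches one 128-bit RGTC2/BC5 block. The sketch uses a 2D target for simplicity; for a true 3D target you’d need a BPTC format instead, since RGTC doesn’t allow 3D.

```c
/* Untested sketch. W and H are the final compressed texture's
 * dimensions (assumed multiples of 4). */
const GLsizei W = 64, H = 64;
GLuint stageId, bc5Id;

/* Staging texture: one RGBA32UI texel per 4x4 BC5 block. */
glGenTextures(1, &stageId);
glBindTexture(GL_TEXTURE_2D, stageId);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32UI, W / 4, H / 4);

/* Destination compressed texture. */
glGenTextures(1, &bc5Id);
glBindTexture(GL_TEXTURE_2D, bc5Id);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RG_RGTC2, W, H);

/* ... bind stageId as an image, dispatch a compute shader that
 * imageStore()s one uvec4 (one encoded block) per texel ... */

glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

/* Copy dimensions are given in texels of the source (uncompressed). */
glCopyImageSubData(stageId, GL_TEXTURE_2D, 0, 0, 0, 0,
                   bc5Id,   GL_TEXTURE_2D, 0, 0, 0, 0,
                   W / 4, H / 4, 1);
```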
