I am trying to reduce the bandwidth consumed by texture updates in an exported application (using OpenGL 1.4 and X11 over a network).
I found this excellent texture compression feature (just declaring a compressed internal format), but my question is: where does the compression take place? On the client side (before the data is sent) or on the server side (on the actual GPU, after travelling through the network)? Will texture compression help reduce traffic on the network?
Another thing that puzzles me: is it worth putting the texture in a display list in order to save bandwidth?
If you choose a compressed internal format, glTexImage2D expects uncompressed data (and will transmit that data via GLX); the implementation (within the X server) performs the compression. So this reduces storage but not bandwidth.
glCompressedTexImage2D accepts (and transmits) compressed data. You need to determine that the implementation accepts data in the specific format. OpenGL 1.4 itself doesn’t require an implementation to support any compressed formats; these depend upon extensions.
A glTexImage* call in a display list only helps if you’re going to be making the same call with the same data more than once. But in that case, you’re better off just creating the texture once and keeping it around.
Thanks, it is now clear who does what.
Now this raises the next question: how do I compress my buffer on the CPU before transferring it (and that in a way that the server understands)?
Call glGetString(GL_EXTENSIONS) and scan the extension list for an extension which defines a compressed format, e.g. GL_EXT_texture_compression_s3tc. Then convert the data to that format and pass it to glCompressedTexImage2D.
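A sketch of the extension check (the helper name `has_extension` is my own; the GL constants are from the s3tc extension spec). Note that a plain `strstr` is not sufficient, because one extension name can be a prefix of another:

```c
#include <string.h>

/* Check whether a space-separated extension string (as returned by
 * glGetString(GL_EXTENSIONS) on OpenGL 1.4) contains the exact name. */
int has_extension(const char *exts, const char *name) {
    size_t n = strlen(name);
    const char *p = exts;
    while ((p = strstr(p, name)) != NULL) {
        int starts = (p == exts) || (p[-1] == ' ');
        int ends   = (p[n] == '\0') || (p[n] == ' ');
        if (starts && ends) return 1;
        p += n;  /* keep scanning: this hit was a substring of a longer name */
    }
    return 0;
}

/* Usage (requires a current GL context):
 *
 *   if (has_extension((const char *)glGetString(GL_EXTENSIONS),
 *                     "GL_EXT_texture_compression_s3tc")) {
 *       glCompressedTexImage2D(GL_TEXTURE_2D, 0,
 *                              GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
 *                              width, height, 0, imageSize, data);
 *   }
 */
```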
Alternatively, you can send it uncompressed to the server using glTexImage2D with a generic compressed internal format (e.g. GL_COMPRESSED_RGBA), query the actual compressed internal format with glGetTexLevelParameter(GL_TEXTURE_INTERNAL_FORMAT) and the compressed size with glGetTexLevelParameter(GL_TEXTURE_COMPRESSED_IMAGE_SIZE), then read the compressed data back with glGetCompressedTexImage. You can then use that data for future glCompressedTexImage2D calls.
The latter option avoids the need to know anything about specific formats, but there’s no guarantee that the compressed format used will be compatible with other systems or future versions.
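For the S3TC formats specifically, you can sanity-check the size the driver reports: images are stored as 4×4 texel blocks of 8 bytes (DXT1) or 16 bytes (DXT3/DXT5). A sketch, with the readback flow as a comment (the helper name `s3tc_image_size` is my own; the GL calls require a current context):

```c
/* Expected GL_TEXTURE_COMPRESSED_IMAGE_SIZE for an S3TC texture:
 * 4x4 blocks, rounded up, times the per-block byte count
 * (8 for DXT1, 16 for DXT3/DXT5). */
unsigned s3tc_image_size(unsigned width, unsigned height, unsigned block_bytes) {
    return ((width + 3) / 4) * ((height + 3) / 4) * block_bytes;
}

/* Readback flow sketched in the answer above:
 *
 *   GLint fmt, size;
 *   glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA, w, h, 0,
 *                GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 *   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
 *                            GL_TEXTURE_INTERNAL_FORMAT, &fmt);
 *   glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
 *                            GL_TEXTURE_COMPRESSED_IMAGE_SIZE, &size);
 *   void *buf = malloc(size);
 *   glGetCompressedTexImage(GL_TEXTURE_2D, 0, buf);
 *   // later, on the same implementation:
 *   glCompressedTexImage2D(GL_TEXTURE_2D, 0, fmt, w, h, 0, size, buf);
 */
```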
Yes, I think in my case the second option will not improve the situation, simply because once the textures are loaded I only call glBindTexture with the texture identifier until the texture is no longer useful and needs to be replaced. I assume GLX then only transmits the identifier (or the list) over the network, not the whole texture.
So the first option seems to be the way to go for me. It will put more load on the CPU, but that is not where my bottleneck is right now (severe bandwidth restrictions are imposed).
I found that Nvidia cards normally support the S3TC DXT1 (6:1 ratio and 1-bit alpha) and S3TC DXT5 (4:1 ratio and 3-bit interpolated alpha) compression formats, which I tested and which give very good results. Now I am looking for a library that can do the compression for me. Not simple. I found this on GitHub, which is written in C++ and implements DXT5 for RGBA formats. I am coding in Ada, so I’ll have to figure out how to write a binding… but this would reduce the bandwidth by a factor of 4, which is probably as low as I can go while keeping decent alpha and resolution.