But in the past, I was criticized for re-editing old posts multiple times
My point is that you should generally state a full, complete thought the first time, rather than coming back every 30 minutes to an hour with updates. Organize your thoughts before posting.
This “new” compressed format could certainly be used in the future as a new internal OpenGL texture format that can be stored in compressed form (cf. quantized/reordered/RLE'd and zlib'ed)
No, it can’t. For the reasons I outlined before (which you have continued to ignore), it makes for a terrible texture format. The very second you start talking about zlib, quantization, RLE, or any such thing, you kill all texturing performance.
Texture formats are optimized for reading. Because that’s what the user does with textures; they read them. That is the most common operation for textures. And that is why formats like S3TC are used and formats like JPEG 2000 are not: the latter are not optimized for reading.
into texture memory while it is not bound (to economize on-card video memory) and only decompressed when needed (i.e., when bound)
So every time I bind this texture, whether I’m going to render with it or just change some texture parameters, it’s going to cause the texture to be decompressed? I don’t know about you, but to me, that sounds like a performance problem. Even if you restrict this to just using the texture, I’d still rather just have the driver do the standard memory management.
It doesn’t save any GPU room compared to evicting unused textures. If used textures are going to be fully decompressed in GPU memory, then what’s the point of having them be compressed in the first place?
Let’s say you have a 1024x1024 texture. With RGBA8 (the alpha component is irrelevant, but needs to be present for alignment reasons), that comes to 4MB. With S3TC, that reduces down to 0.5MB. If you use something like JPEG 2000, you may be able to reduce this to 0.04MB.
Now, when you have this texture on the GPU, it takes up 0.04MB in the JPEG 2000 case, and 0.5MB in the S3TC case. However, when you have to use this texture, it takes up 4.04MB in the JPEG 2000 case, and 0.5MB in the S3TC case. This is because texture units cannot directly access and decompress JPEG 2000, so any such textures must be decompressed into GPU memory before the texture units can access them.
0.5MB is smaller than 4.04MB. S3TC wins. I can pack more S3TC textures into video memory than JPEG 2000.
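The arithmetic above can be sketched out directly. This is just a back-of-the-envelope check; the ~100:1 JPEG 2000 ratio is the hypothetical figure from the example, not a guaranteed number:

```python
# Memory footprint of a 1024x1024 texture, in MB, matching the
# figures above.

WIDTH = HEIGHT = 1024

rgba8 = WIDTH * HEIGHT * 4 / 2**20    # 4 bytes/pixel -> 4.0 MB
s3tc  = WIDTH * HEIGHT * 0.5 / 2**20  # DXT1: 4 bits/pixel -> 0.5 MB
jp2   = 0.04                          # assumed ~100:1 compression

# Footprint while the texture is actually being sampled:
# S3TC is read directly by the texture units, so it stays compressed.
# JPEG 2000 must also keep a fully decompressed RGBA8 copy resident.
s3tc_in_use = s3tc        # 0.5 MB
jp2_in_use  = jp2 + rgba8 # 4.04 MB

print(s3tc_in_use, jp2_in_use)  # 0.5 4.04
```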
Also, let’s consider decompression performance.
You keep saying that hardware has JPEG decompression built into it. And it does (though that document says nothing about JPEG 2000, which is a very different format from regular JPEG). But how many 2048x2048 images do you think they can decompress in 1/60th of a second?
The purpose of built-in JPEG decompression in mobile GPUs is to alleviate the CPU burden when viewing images over the web. In those circumstances, you don’t need instant results. You can wait 0.2 seconds for all 20 of the website’s images to be decompressed.
You cannot wait 0.2 seconds for 20 images to be decompressed in a real-time application. That murders performance. And if the textures are always uncompressed, then you’re not saving any video memory. Indeed, you’re losing video memory by having both the uncompressed and compressed forms around.
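To put that 0.2 seconds in perspective, here is a rough calculation using the numbers from the paragraph above (20 images, 0.2 s total decode time; both are illustrative, not measured):

```python
# How many 60 fps frames does a 0.2 s decode stall cost?

frame_budget = 1 / 60   # ~16.7 ms per frame at 60 fps
decode_total = 0.2      # assumed time to decode all 20 images

frames_lost = decode_total / frame_budget
print(frames_lost)      # 12.0 -- a visible hitch of about 12 frames
```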
You are not the first person to think that their wavelet-, JPEG-, or similar-based format can beat S3TC, nor will you be the last. But thus far, all of them have run afoul of the simple and obvious fact that S3TC is a texture format, designed for the specific purpose of being quickly addressed and decompressed by texture units. And the alternatives are not.