I just did a little OpenGL demo of an S3TC implementation, showing the effects that DXT1, DXT3, and DXT5 have on the alpha channel.
According to Nvidia's nvOpenGLspecs-01.pdf file, DXT3 uses 64 bits of UNcompressed alpha per 4x4 texel block, and DXT5 uses 64 bits of compressed alpha per 4x4 texel block.
How come then my demo shows that DXT5 has much better alpha quality than DXT3?
Or is this a printing error… or is it a bug in my app?
I can't seem to see any.
it’s at www.nutty.org if anyone cares…
That is because in those 64 bits, only a 4-bit alpha value is stored for each pixel, so you only get 16 different alpha levels.
The compressed format uses full-resolution 8-bit alpha. Which format looks better depends on the image itself: the uncompressed one should look better when the alpha value is different in every pixel of the 4x4 block.
The compressed one should look better when only a few alpha values are used within a 4x4 block.
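To make the DXT3 case concrete, here is a minimal sketch (not from the demo; the function name and bit layout convention are my assumptions, based on the S3TC block description) of decoding the 64-bit explicit alpha half of a DXT3 block — one 4-bit nibble per pixel, expanded to 8 bits:

```python
def decode_dxt3_alpha(block64):
    """Decode the 64-bit explicit alpha of a DXT3 block (a sketch).

    Each of the 16 pixels gets a 4-bit nibble, lowest nibble first
    (assumed ordering).  The nibble is expanded to 8 bits by
    replication (0xN -> 0xNN), so only 16 distinct levels exist.
    """
    alphas = []
    for i in range(16):
        nibble = (block64 >> (4 * i)) & 0xF
        alphas.append(nibble * 17)  # 0..15 -> 0, 17, 34, ..., 255
    return alphas
```

Note how every pixel can have its own value, but each value is quantized to one of only 16 fixed levels.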
hmm, not exactly…
IMHO, compressed alpha is always better.
I just cannot imagine some crazy alpha pattern in as few as 4x4 pixels which needs more than 8 distinct values.
The uncompressed format allows only the same 16 levels (4 bits) across the whole texture.
Compressed alpha allows 8 levels for each 4x4 block, but these levels are created by interpolating between two 8-bit values.
One block can use values 0…7, another 100…150. That allows more precision in the parts with smooth changes in the alpha channel.
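That per-block palette can be sketched as follows (a simplified illustration following the interpolation rules in the S3TC extension spec; the function name is mine, and plain integer division stands in for whatever rounding the hardware actually does):

```python
def dxt5_alpha_palette(a0, a1):
    """Build the 8-entry alpha palette a DXT5 block selects from.

    a0 and a1 are the two 8-bit endpoint values stored in the block;
    each pixel then picks one entry with a 3-bit index.
    """
    if a0 > a1:
        # 8-value mode: the endpoints plus 6 interpolated steps
        return [a0, a1] + [((7 - i) * a0 + i * a1) // 7 for i in range(1, 7)]
    else:
        # 6-value mode: 4 interpolated steps plus explicit 0 and 255
        return ([a0, a1]
                + [((5 - i) * a0 + i * a1) // 5 for i in range(1, 5)]
                + [0, 255])
```

So a block with endpoints 100 and 150 gets evenly spaced levels between 100 and 150 at full 8-bit precision, which is exactly why smooth alpha gradients survive DXT5 better than DXT3's fixed 16 levels.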