RGB_S3TC_DXT1 vs. RGBA_S3TC_DXT1

GL_COMPRESSED_RGB_S3TC_DXT1_EXT vs GL_COMPRESSED_RGBA_S3TC_DXT1_EXT

I’ve been looking at several .dds loading modules, and each of them specifies one of those formats, but never both. Since images saved in DXT1 format take up the same amount of space with or without alpha information, I was wondering whether loading an image with no alpha as GL_COMPRESSED_RGBA_S3TC_DXT1_EXT wastes some color information. If there is no difference between the two, why does the distinction exist?

Even if the storage format is indeed the same, the use is different. Isn’t that enough to give each a name of its own?

Originally posted by therealremi:
I was wondering whether loading an image with no alpha as GL_COMPRESSED_RGBA_S3TC_DXT1_EXT wastes some color information?
Depending on the encoder and the image, some blocks within a DXT1-compressed image may come out at higher quality when the RGB variant is used.

In the DXT1 format, each 4x4 block is encoded as two 16-bit colors, and each texel inside that block is assigned a 2-bit value in the range <0,3> that selects its color. For each block, the encoder chooses one of two sets of mappings between the value and the texel color.

First set is: COLOR0, COLOR1, (2COLOR0+COLOR1)/3, (COLOR0+2COLOR1)/3.
Second set is: COLOR0, COLOR1, (COLOR0+COLOR1)/2, BLACK.

This is the same for both RGB and RGBA DXT1. For RGBA, all transparent texels are encoded using the second set with the value corresponding to BLACK, which is therefore unavailable for black opaque texels.
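To make the two palette sets concrete, here is a minimal decoding sketch in Python. It assumes the standard S3TC rule that a block uses the first (four-color) set when color0 > color1 as unsigned 16-bit values, and the second (three-color plus black) set otherwise; the function names are hypothetical, for illustration only.

```python
def rgb565_to_rgb888(c):
    """Expand a packed 5:6:5 color to 8-bit-per-channel RGB."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    # Replicate high bits into the low bits to scale up to 0..255.
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))

def dxt1_palette(color0, color1):
    """Build the 4-entry color palette for one DXT1 block.

    color0 and color1 are the two raw 16-bit endpoint colors stored in
    the block. When color0 > color1 (unsigned comparison) the block uses
    the first set (two interpolated colors); otherwise it uses the
    second set, whose last entry is explicit black (and is treated as
    transparent black under the RGBA interpretation).
    """
    c0 = rgb565_to_rgb888(color0)
    c1 = rgb565_to_rgb888(color1)
    if color0 > color1:
        # First set: COLOR0, COLOR1, (2*C0+C1)/3, (C0+2*C1)/3.
        c2 = tuple((2 * a + b) // 3 for a, b in zip(c0, c1))
        c3 = tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))
    else:
        # Second set: COLOR0, COLOR1, (C0+C1)/2, BLACK.
        c2 = tuple((a + b) // 2 for a, b in zip(c0, c1))
        c3 = (0, 0, 0)  # transparent black in RGBA DXT1
    return [c0, c1, c2, c3]
```

For example, `dxt1_palette(0xFFFF, 0x0000)` yields the four-color set white, black, and two greys, while `dxt1_palette(0x0000, 0xFFFF)` yields the three-color set with an explicit black entry.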

If an opaque image contains blocks with some black texels, the RGB encoder can use the second set, which represents black exactly via the explicit BLACK entry while still keeping both endpoint colors available. In the same situation, the RGBA encoder may have to spend one of its colors on the black texels (unless it decides that black is unimportant in that block), which can reduce the quality of such a block compared with the RGB variant.