using compressed textures

I'm programming my own game engine. I'd like to ask how fast the runtime processing (decompression) of compressed textures on the GPU is, compared to textures uploaded without GL_COMPRESSED_RGB_ARB. I'm wondering whether I should compress all textures (if the extension is supported) or not. Or is it better to compress only some textures, and if so, how do I decide which ones? Only the big ones?

Any advice please?

As far as I'm aware, compression speeds up disk-to-memory and memory-to-videocard transfers by as much as 6 times (depending on the compression algorithm used), and also speeds up texture sampling because more texture fits in the sampling cache. So it's not a question of avoiding compressed textures because of some kind of performance overhead; rather the opposite, you get better performance AND you get more storage. The only downside is compression artefacts.

When I start my engine with compression enabled, it takes a few seconds longer to load, because the GPU/driver has to compress the textures first, so load times go up.
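For reference, here is a minimal sketch of what letting the driver compress looks like with ARB_texture_compression. The function name and the `pixels` buffer are placeholders; only the GL enums and calls themselves are real:

```c
/* Sketch: ask the driver to compress at upload time. Assumes a GL 1.3+ context
   (or ARB_texture_compression) and that `pixels` holds width x height RGB8 data
   loaded elsewhere (hypothetical loader). */
#include <GL/gl.h>
#include <GL/glext.h>

GLuint upload_compressed_rgb(const GLubyte *pixels, GLsizei width, GLsizei height)
{
    GLuint tex;
    GLint compressed = GL_FALSE;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* Passing a compressed internal format makes the driver compress for us.
       GL_COMPRESSED_RGB_ARB lets the driver pick the format; you can also ask
       for GL_COMPRESSED_RGB_S3TC_DXT1_EXT explicitly if S3TC is supported. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_ARB,
                 width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    /* Check whether the driver actually stored the texture compressed. */
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_ARB, &compressed);
    if (!compressed) {
        /* Driver fell back to an uncompressed format; still works, just bigger. */
    }
    return tex;
}
```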

On the other hand, bandwidth usage inside the GPU is lowered and decompression is very fast. With compressed textures (I use compression for everything) I usually get significantly higher framerates. In the worst case it is the same speed as without compression; that would be the case if you are not limited by texture accesses/bandwidth.

Jan.

It's a bad idea to let the driver do the compression; drivers are renowned for doing a sloppy job. The best idea is to use one of the many Photoshop/Paint Shop Pro plugins to compress to the required format, then save the textures out in this compressed form. Loading then becomes much quicker than with uncompressed textures, and you have more control over the final look of your textures.
If you leave it up to each and every driver to do the compression, you're risking crap-looking textures on your customers' machines.
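For the loading side, here is a rough sketch of uploading data that was compressed offline, assuming the raw DXT1 blocks have already been read from your own file format (the parsing is not shown and the function name is made up; on Windows you would fetch glCompressedTexImage2D through wglGetProcAddress):

```c
/* Sketch: upload data that was compressed offline (e.g. by a Photoshop plugin
   and saved as raw DXT1 blocks). Assumes `blocks`/`size` were read from disk
   by your own loader (hypothetical); only the GL call below is real API. */
#include <GL/gl.h>
#include <GL/glext.h>

void upload_precompressed_dxt1(GLuint tex, GLsizei width, GLsizei height,
                               const void *blocks, GLsizei size)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* No compression happens here; the driver just copies the blocks,
       which is why load times drop compared to driver-side compression.
       For DXT1, size is max(1, width/4) * max(1, height/4) * 8 bytes. */
    glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                           width, height, 0, size, blocks);
}
```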

OK, thank you all.

Decompression is fast and native to the graphics card. It happens in hardware when the texture fetch is made; it reduces the bandwidth required for texture fetches, may also improve texture cache efficiency, and increases the available texture storage capacity tremendously.

It does affect quality, but for most natural images the sorts of algorithms used are pretty darned good. You should turn this on if performance and texture resolution are an issue; it's easy to see that high-res compressed textures may be preferable to lower-res uncompressed ones. For example, a 1024x1024 DXT1 texture takes 512 KB, half the memory of a 512x512 uncompressed RGBA texture at 1 MB.

Vendors like S3 provide tools and plugins for packages like Photoshop to do the compression. However, you should understand that the kinds of compression algorithms spec'd by graphics hardware are pretty trivial; it's not like MPEG or something entirely arbitrary and complex. The specification is really a decompression one, so you do have to make decisions about how you choose your endpoint samples to generate the interpolants, but it's pretty limited w.r.t. what you can really do to stuff it up. You typically have a 4x4 block of texels to deal with at a time (assuming you don't use cell adjacency information in your algorithm). There's definitely scope for good and bad compression through poor choices for the color interpolants in the cell.
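To make the endpoint/interpolant idea concrete, here is a rough sketch of how one DXT1 colour block decodes (opaque mode only; the punch-through alpha case where color0 <= color1 is left out):

```c
/* Sketch: decode one 64-bit DXT1 block into a 4x4 array of RGB8 texels.
   Opaque mode only (color0 > color1). This is the decompression the hardware
   does per block, which is why the encoder's only real freedom is picking
   the two endpoint colours. */
#include <stdint.h>

static void rgb565_to_rgb888(uint16_t c, uint8_t out[3])
{
    out[0] = (uint8_t)(((c >> 11) & 0x1F) * 255 / 31);  /* R: 5 bits */
    out[1] = (uint8_t)(((c >> 5)  & 0x3F) * 255 / 63);  /* G: 6 bits */
    out[2] = (uint8_t)(( c        & 0x1F) * 255 / 31);  /* B: 5 bits */
}

void decode_dxt1_block(const uint8_t block[8], uint8_t out[4][4][3])
{
    uint16_t c0 = (uint16_t)(block[0] | (block[1] << 8));  /* endpoint 0, RGB565 */
    uint16_t c1 = (uint16_t)(block[2] | (block[3] << 8));  /* endpoint 1, RGB565 */
    uint8_t palette[4][3];
    int x, y, k;

    rgb565_to_rgb888(c0, palette[0]);
    rgb565_to_rgb888(c1, palette[1]);
    for (k = 0; k < 3; ++k) {
        /* Two interpolants at 1/3 and 2/3 between the endpoints. */
        palette[2][k] = (uint8_t)((2 * palette[0][k] + palette[1][k]) / 3);
        palette[3][k] = (uint8_t)((palette[0][k] + 2 * palette[1][k]) / 3);
    }

    /* 32 bits of indices: 2 bits per texel selecting one of the 4 palette entries. */
    for (y = 0; y < 4; ++y) {
        uint8_t row = block[4 + y];
        for (x = 0; x < 4; ++x) {
            int idx = (row >> (2 * x)) & 0x3;
            for (k = 0; k < 3; ++k)
                out[y][x][k] = palette[idx][k];
        }
    }
}
```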

I must admit, the NVIDIA driver does quite a good job of compressing, and does it far faster than the function I wrote. Makes me wonder whether it uses the GPU in some way.
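One possible compromise, sketched below under the assumption that ARB_texture_compression is available: let the driver compress on the first run, read the compressed blocks back, and cache them to disk so later loads can go through glCompressedTexImage2D. Error handling and the file I/O are left out:

```c
/* Sketch: read back driver-compressed blocks so they can be cached to disk.
   Assumes the texture was already uploaded with a compressed internal format. */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdlib.h>

void *read_back_compressed(GLuint tex, GLint *out_size)
{
    GLint compressed = GL_FALSE, size = 0;
    void *blocks = NULL;

    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_COMPRESSED_ARB, &compressed);
    if (!compressed)
        return NULL;  /* driver stored it uncompressed, nothing to cache */

    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0,
                             GL_TEXTURE_COMPRESSED_IMAGE_SIZE_ARB, &size);
    blocks = malloc((size_t)size);
    if (blocks != NULL)
        glGetCompressedTexImage(GL_TEXTURE_2D, 0, blocks);  /* core in GL 1.3 */

    /* You would also query GL_TEXTURE_INTERNAL_FORMAT so the cache file records
       which format to pass to glCompressedTexImage2D on the next load. */
    *out_size = size;
    return blocks;  /* caller writes this to the cache file and frees it */
}
```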