Texture format for speed

What is the absolute best texture format for cards that don’t support texture compression? Most programs use BMPs, but they are so huge… TGAs also suffer from this largeness, and RAWs are pretty funky too.
One thing about BMPs is that they support RLE compression. TIFFs support lossless compression too, right? But wouldn’t that just balloon the texture in memory after you open it?
The Quake 3 engine uses JPEGs, but only for a few things. One of the few things I know is that certain cards will not process certain formats as-is; the driver converts them to a more familiar format. I.e., the card’s drivers would expand a JPEG to an uncompressed bitmap in video memory so that it works with the card better.
PNGs are awfully nice. Not only are they small thanks to their lossless compression, but they are 24-bit too. But do video cards also expand them to uncompressed bitmaps?
So what am I to do if I want to run 50 different textures on screen?

Once you load the texture, your only format options are whatever compression the card supports, or uncompressed. There are various paletted texture formats as well, and you can lower the bit depth of the texture.

If you want 50 textures on screen, fine. At 256×256×32-bit, 50 textures is only around 13MB, which will fit comfortably in most 32MB video cards. If your card is older and has less memory, you may have to drop down to smaller textures or lower their color depth.

Just use simple math calculations to see how big your textures will be in memory. Once you do that, it’ll be easy to tell how many textures can fit in video memory.

Compressed formats, e.g. JPG, PNG:
+ take up less space on your disk
- take more time to decompress
- some formats, e.g. JPG, are lossy, so you lose a bit of the image’s quality.

Uncompressed formats, e.g. TGA, BMP (typically):
+ less time to load (no decompression)
- take up more space on the HD

OpenGL sees them both the same once you give it the data, i.e. you give it a 256×256×4 array of colour values, and it doesn’t mind where that came from.

Personally I use TGAs for testing purposes because my program loads up quicker, though you might find that if you distribute it over the internet, a compressed format serves you better.

Thanks, I thought that compressed images took longer to load. Thanks for verifying that. BTW, can’t you go above 256×256? I mean, think of a huge quad, 3000 pixels high and wide; a small 256×256 bitmap stretched over it would look awful. But that example is for example’s sake, no one would actually make a quad that huge.

Hmmm… not sure I agree with you there…
Ever played Quake 3 from the CD? Takes AGES to load… I’m pretty sure a fast CPU can decompress a lot quicker than any hard drive can read the equivalent uncompressed data.

(unless the decoder is ****e)

This is especially true for CD based platforms (consoles).

Make your own format… store only what you have to… and then compress that. That will give you the best speed/size/quality ratio.


Heheh, a compressed texture library. That is what I was thinking of, like UT. But I do not know how to make my own format; even though its structure is simple, I couldn’t do it because I don’t know the first thing about it. I guess that makes me a beginner. But could you direct me to a resource where I could learn how?

You could e.g. use a ziplib. This puts all your textures in a zip file and you can extract them from there. The good thing here is that you can edit the files with e.g. WinZip.

If you want to really do it on your own, use run-length encoding (easy) or Huffman coding (which is better, but more complicated). A search for these terms should work.

But what format would save space in memory? Yeah, a ziplib is cool; Quake 3 uses one. And WinZip has a nice directory format. So maybe I could do that. But I was thinking of a format that would feed values into a display list loop for every texture in a library.

Huffman encoding sounds complicated… it counts how often each value occurs and then represents the most frequent ones with shorter codes. Am I right?

zlib (not a ziplib) is available for free (open source), is relatively easy to use, and the compression level is adjustable (0–9). It uses an LZ77/Huffman combination (deflate).
After the Huffman stage, you could probably transform the compressed data and apply LZSS (the free LZW alternative) to it and crank the compression up a little more. zlib as well as LZSS compression are efficient and very fast.

Thanks for the info! I had no clue that it was open source. I should have searched for it.

Was just there, so I thought I’d make sure you were headed in the right direction: http://www.info-zip.org/pub/infozip/zlib/ … I would suggest you just use the seek command along with the standard open and read commands for a very efficient way to use the library (if you’re going to have many textures in one file).

thanks! all relevant info has and always will be appreciated