Every texture in my game has a bit depth of 8, meaning there are at most 256 unique colours in each texture.
This means I could store the index of each pixel’s colour, rather than the entire RGB value for each pixel (i.e. change the texture format from RGB8 to R8). This index could then be used to select the actual colour from a lookup table, which would be stored in a uniform array, SSBO, 1D texture, etc.
The downsides are that texture filtering would break (can’t interpolate between colour indices), and using the result of one texture lookup to read from another texture is slow.
There was an extension for exactly this, EXT_paletted_texture, but support for it has been dropped. Are there any modern solutions to this? Reducing texture VRAM by 66% (3 bytes per texel down to 1, plus a small palette) would be great.
Palettes are a memory vs. performance tradeoff. Palettes require 2 reads to get a color. That’s just how it is. You save memory, but lose performance.
So you should benchmark it to see if the gain is worth it.
And yes, you’re going to have to do this manually, including filtering. There’s no hardware support for palettes these days.
Is texture memory usage a problem for you? No? Then the simplest approach is to do nothing.
Remember, video memory is a resource to be used, so if you have, say, 4 GB of video memory and you’re only using 256 MB, then the rest of it is being wasted. Trying to reduce your usage even further is not going to give you any benefit.