lightmap textures with 3 channels instead of 1? why?

So I’m building a lighting system using lightmaps with shaders. Reading articles and tutorials around the web, I found them all using textures. All I need is one channel to indicate how much light passes through, so why do all those examples use the full 3 or 4 channels (RGBA) and waste memory? Since one-channel textures don’t exist (right?), shouldn’t we use data arrays rather than textures for this, to avoid wasting memory?

If you place a red lamp near a wall, the wall isn’t illuminated in white. If you then also place a green lamp nearby, some parts of the wall are illuminated by both lamps, and the two colors add to yellow where they overlap.

Yes, that’s correct, but my lighting doesn’t use colored lights, so I don’t need the extra two channels.

I’ve found that there used to be one-channel texture formats, GL_ALPHA, GL_LUMINANCE, and such, but those are deprecated.
Now you can create textures with 1 to 4 components:

Image formats do not have to store each component. When the shader samples such a texture, it will still resolve to a 4-value RGBA vector. The components not stored by the image format are filled in automatically. Zeros are used if R, G, or B is missing, while a missing Alpha always resolves to 1.
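To illustrate the fill-in behavior, here is a minimal fragment-shader sketch (the uniform and output names are illustrative) sampling a texture stored with a one-channel internal format such as GL_R8:

```glsl
#version 330 core

uniform sampler2D lightMap;   // texture with a one-channel internal format, e.g. GL_R8
in vec2 uv;
out vec4 fragColor;

void main()
{
    // Even though only R is stored, the sample resolves to a full
    // vec4(r, 0.0, 0.0, 1.0): missing G/B are filled with 0, missing A with 1.
    vec4 texel = texture(lightMap, uv);
    fragColor = vec4(vec3(texel.r), 1.0);
}
```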

OpenGL has a particular syntax for writing its color format enumerants. It looks like this:

GL_[components][size][type]

The components field is the list of components that the format stores. OpenGL only allows “R”, “RG”, “RGB”, or “RGBA”.

Use GL_R8; that’s a one-component internal format, and it is not deprecated. Then in your shader use:

vec3 lightMapValue = texture(…).rrr;
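In context, a sketch of a full fragment shader using that swizzle might look like this (the base-texture uniform and variable names are illustrative, not from the original answer):

```glsl
#version 330 core

uniform sampler2D baseTexture;  // the surface's color texture (illustrative name)
uniform sampler2D lightMap;     // GL_R8 lightmap: one channel of light intensity
in vec2 uv;
out vec4 fragColor;

void main()
{
    // .rrr broadcasts the single red channel to all three color components,
    // so the intensity can modulate the base color directly.
    vec3 lightMapValue = texture(lightMap, uv).rrr;
    vec3 baseColor = texture(baseTexture, uv).rgb;
    fragColor = vec4(baseColor * lightMapValue, 1.0);
}
```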

To answer your other question: a data array lookup in a shader (particularly in a fragment shader) is likely going to be much slower than a texture lookup. Your GPU has specialised hardware for doing texture lookups fast, whereas a data array lookup is a more general operation that is less likely to be optimized on all hardware.
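For comparison, the array-based alternative would look something like this in GLSL (the uniform array and its dimensions are illustrative). Besides being slower on many GPUs, a uniform array is tightly limited in size and gives you no filtering or wrapping for free:

```glsl
#version 330 core

const int LIGHTMAP_WIDTH  = 64;  // illustrative dimensions; uniform storage is very limited
const int LIGHTMAP_HEIGHT = 64;

uniform float lightData[LIGHTMAP_WIDTH * LIGHTMAP_HEIGHT];
in vec2 uv;
out vec4 fragColor;

void main()
{
    // Manual nearest-neighbour lookup: no dedicated hardware, no bilinear
    // filtering, no texture cache; a sampler does all of that for you.
    int x = int(uv.x * float(LIGHTMAP_WIDTH - 1));
    int y = int(uv.y * float(LIGHTMAP_HEIGHT - 1));
    float light = lightData[y * LIGHTMAP_WIDTH + x];
    fragColor = vec4(vec3(light), 1.0);
}
```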

You seem somewhat fixated on not wanting to “waste memory”, and that’s a fixation you’re going to need to lose. You’ll find that there are plenty of situations where memory usage is far from everything, and you will occasionally have to use extra memory to get things working well (or working at all). Programming for the GPU is not the same as programming for the CPU, and the rules often change. Examples include padding vertex structs and using 4-component textures instead of 3-component ones.