Let's say I have a level with many lights that use shadow mapping. At any time only a few lights are active, since I have a decent occlusion-culling algorithm for the lights.
But creating the shadow maps leads to a very large number of textures. Does anyone have an opinion on gathering all the active lights together and packing their shadow maps into a few larger textures using standard 2D bin packing? Good idea, or bad (premature optimization, or likely a worthwhile one)?
One issue is that when you switch a depth texture from render target back to a sampled texture, you incur some decompression overhead.
So, if you choose that implementation, you want to render all the shadow maps together, rather than alternating read and write operations.
If you have only a few active lights, why do you need so many shadow map textures? It seems that you could reuse your shadow map textures, rather than allocating one shadow map per light.
Okay, this should not be a problem, since the shadow-map rendering pass happens before the drawing pass that samples them.
Hmmm, I hadn’t thought of this angle. So I would have a bucket of in-use depth textures that I assign to lights as the lights become active. A bit more complex to code, but I see the scaling benefit.
So I would grow (and maybe shrink) the pool of in-use textures as the number of active lights rises and falls, seeding it with my best up-front guess at the maximum number of active lights.
Nice idea, thanks!
Another problem that I will have to deal with, I guess, is that right now each light's shadow maps are kept in a texture array. This makes the shader code very simple, since I don't have to battle with arrays of texture samplers (think six textures for a point light, or n for cascaded shadow maps; spotlights are easy, just one texture).
But perhaps my cache of active textures could also be an array…