Array Texture Data Manager lib?

Hi,

I'm trying to put all the images I need into array textures, for example:

Array texture 1:
– RGBA8, 1024 x 1024
– for diffuse and specular textures

Array texture 2:
– R8, 1024 x 1024
– for alpha / height / other things that only need 1 channel

etc …

There is a problem, however: memory waste.
What if I have a diffuse texture that is only 512 x 512? It will fit into a layer, but only 25% of that layer is actually used. A solution would be a class that manages texture layers efficiently. Is there a lib or an example where something like this is implemented? That would be very useful. With it, I could bind the array textures once, leave them bound for the lifetime of the app, and access every image with a layer index (part of the material) + texcoord (part of the mesh).
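For the "bind once" part, a minimal sketch of how that could look (the uniform names uDiffuseArray / uLayer, the arrayTex/program handles and the material.layer field are just made up for illustration):

// at startup: bind the array texture to a fixed unit and leave it bound
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D_ARRAY, arrayTex);
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "uDiffuseArray"), 0); // sampler2DArray on unit 0

// per draw call: only the layer index changes (taken from the material)
glUniform1f(glGetUniformLocation(program, "uLayer"), (float)material.layer);
// the fragment shader then samples with texture(uDiffuseArray, vec3(uv, uLayer))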

I'm trying to write such a manager, but I don't know how to do it (yet) :dejection: (I'm probably making it too general in applicability.)

Example interface:

manager.append("image1.dds", width, height);
manager.append("image2.png", width, height);
...
int layers = manager.getLayers();
glTexStorage3D(... layers, ...)

for each layer
-- manager.getLocationOf("tex1");
-- glTexSubImage3D(....)
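To make that interface concrete, here is a rough sketch of the bookkeeping such a manager could do, assuming (for now) one full layer per image and a single RGBA8 format; loadImage() and the GL loader header are placeholders:

#include <string>
#include <unordered_map>
#include <vector>
#include <GL/glew.h> // or whatever GL loader you use

std::vector<unsigned char> loadImage(const std::string& name); // placeholder loader

struct Entry { int layer; int width; int height; };

class ArrayTextureManager {
public:
    // reserve a layer for an image, return its layer index
    int append(const std::string& name, int width, int height) {
        int layer = (int)entries.size();
        entries[name] = { layer, width, height };
        return layer;
    }
    int getLayers() const { return (int)entries.size(); }
    const Entry& getLocationOf(const std::string& name) const { return entries.at(name); }

    // allocate the storage and upload every registered image
    void upload(GLuint tex, int texW, int texH, int mipLevels) {
        glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipLevels, GL_RGBA8, texW, texH, getLayers());
        for (const auto& [name, e] : entries) {
            std::vector<unsigned char> pixels = loadImage(name);
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0,
                            0, 0, e.layer,        // x, y, layer
                            e.width, e.height, 1, // one layer at a time
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
        }
        glGenerateMipmap(GL_TEXTURE_2D_ARRAY);
    }
private:
    std::unordered_map<std::string, Entry> entries;
};

Packing several smaller images into one layer would then just mean storing an (x, y, layer) offset per entry instead of a whole layer.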

I believe you might be interested in texture atlases. Google for it.
Alternatively, I found this NVIDIA page, but I'm totally unsure whether it will help you…
And as you spotted, you can use glTexSubImage, which I believe is what people most commonly use. Play with it a bit :slight_smile:

Thanks for your reply!

The issue with "predefined" 2D images is that artifacts can appear when generating mipmaps. So I thought it would make sense to separate the images that have a power-of-two resolution and build layers out of them, and, if necessary, put all the others with arbitrary resolutions into other layers. The latter is quite difficult to manage (at least for me :confused:). And then there are sometimes images of e.g. 512 x 256, power-of-two images that aren't square, which makes the task even more complicated…

Mipmapping issues are only due to bad filtering of the pixels at the boundary between one image and its neighbour: pixels of the neighbouring image get taken into account when computing a new mipmap level of the first image, which they shouldn't be.

You have two solutions for this. One is to separate the images with a few empty padding pixels, but that leads to some wasted space. The other is to generate the mipmaps yourself. Or half yourself: generate a single texture with mipmaps through OpenGL, retrieve its data for each mipmap level, then fill the atlas at each level with that data.
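A sketch of that last ("half yourself") approach, assuming desktop GL where glGetTexImage is available; w, h, levels, pixels, atlasTex, layer and the xoff/yoff placement come from your packer and are placeholders here (needs <vector>):

// 1) build a standalone texture and let OpenGL generate its mip chain
GLuint tmp;
glGenTextures(1, &tmp);
glBindTexture(GL_TEXTURE_2D, tmp);
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);

// 2) read each level back and copy it into the atlas at the matching level
glBindTexture(GL_TEXTURE_2D_ARRAY, atlasTex); // independent binding point, tmp stays bound to GL_TEXTURE_2D
std::vector<unsigned char> buf;
for (int level = 0; level < levels; ++level) {
    int lw = w >> level; if (lw == 0) lw = 1;
    int lh = h >> level; if (lh == 0) lh = 1;
    buf.resize((size_t)lw * lh * 4);
    glGetTexImage(GL_TEXTURE_2D, level, GL_RGBA, GL_UNSIGNED_BYTE, buf.data());

    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, level,
                    xoff >> level, yoff >> level, layer, // offsets must stay aligned across levels
                    lw, lh, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, buf.data());
}
glDeleteTextures(1, &tmp);

On GL 4.3+ you could also skip the CPU round trip and copy level by level with glCopyImageSubData instead.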

For what you encounter with non-square power-of-two textures, search for texture packing algorithms. This discussion will give you some hints.
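If it helps, even a very naive "shelf" packer already gives usable results for mixed rectangles; this little sketch is only to show the idea (everything in it is made up, the algorithms in that thread do better):

struct Rect { int x, y, w, h; };

// naive shelf packing: fill a row left to right, open a new row when it overflows
class ShelfPacker {
public:
    ShelfPacker(int width, int height) : atlasW(width), atlasH(height) {}

    bool pack(int w, int h, Rect& out) {
        if (w > atlasW || h > atlasH) return false; // image can never fit
        if (cursorX + w > atlasW) {                 // current shelf is full
            cursorX = 0;
            cursorY += shelfH;
            shelfH = 0;
        }
        if (cursorY + h > atlasH) return false;     // atlas is full
        out = { cursorX, cursorY, w, h };
        cursorX += w;
        if (h > shelfH) shelfH = h;
        return true;
    }

private:
    int atlasW, atlasH;
    int cursorX = 0, cursorY = 0, shelfH = 0;
};

Sorting the images by height before packing them usually tightens this up considerably.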

You can't avoid bleeding between images within an atlas if you allow minification to the point where an image covers less than a pixel (i.e. the coarsest mipmap levels). You can use glTexParameter() with GL_TEXTURE_MAX_LEVEL or GL_TEXTURE_MAX_LOD to prevent that situation from occurring.

As texture parameters are set for entire textures (rather than layers), this does prevent “full-size” images from having a complete set of mipmaps. But this shouldn’t be a significant issue if you’re packing e.g. 16 (4x4) 256x256 images into a 1024x1024 layer. It can be an issue if you try to bundle small “sprites” along with much larger images.
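For completeness, the clamp itself is just one parameter on the (bound) array texture; the value 2 below is only an example, derived from the smallest image you pack:

glBindTexture(GL_TEXTURE_2D_ARRAY, atlasTex);
// don't use mip levels beyond level 2
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LEVEL, 2);
// or only clamp the sampling, keeping the full chain allocated
glTexParameterf(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAX_LOD, 2.0f);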