GL_TEXTURE_2D_ARRAY: how to resize images ?

hi, i’m using a texture2Darray for model texturing
the upsides are clear: i can access at least 2048 different layers (the OpenGL 4.5 minimum for GL_MAX_ARRAY_TEXTURE_LAYERS), each a separate image
that allows me to bind that array just once to a certain texture unit

struct Vertex {
	vec3 position;
	vec2 texcoords;
	vec3 normal;
	int materialindex;
};

struct Material {
	vec4 Kd, Ks;
	float Ns{ 0.0f };
	int Index_Kd;
	int Index_Ks;
	int Index_Ns;
	//float _PADDING_[...]; not necessary
};

using that vertex layout, and a uniform block (materials) containing many materials, i plan to use Index_Kd in the fragment shader to sample from a certain image / layer of the texture2Darray

my problem:
i get a “GL_INVALID_VALUE” when the image dimensions exceed the (fixed) size of each layer (1024 x 1024)
(for example, if the image = 1150 x 700, a gl error occurs because 1150 > 1024)

whats the “best” solution / method to resize a 2D image W x H to MAXWIDTH x MAXHEIGHT ?

another question:
how should i generate mipmaps for that texture2Darray ? where should i call glGenerateMipmap(…) ?

// textures
unsigned int mipmapcount = 1, width = 1024, height = 1024, layers = texturediffusearray.size() + 1;

glBindTexture(GL_TEXTURE_2D_ARRAY, texturearray);
glTexStorage3D(GL_TEXTURE_2D_ARRAY, mipmapcount, GL_RGBA8, width, height, layers);
for (unsigned int i = 0; i < texturediffusearray.size(); i++)
{
	int w, h, c;
	unsigned char* data = SOIL_load_image(texturediffusearray[i].c_str(), &w, &h, &c, 4);
	if (data)
	{
		// this is where GL_INVALID_VALUE occurs if w > width or h > height
		glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, w, h, 1, GL_RGBA, GL_UNSIGNED_BYTE, data);
		SOIL_free_image_data(data);
	}
	else
	{
		std::cout << "ERROR: cant load texture: " << texturediffusearray[i] << std::endl;
	}
}
glBindTexture(GL_TEXTURE_2D_ARRAY, 0);

Bind the desired layer of the array to a framebuffer, then render a quad, sampling from the original 2D texture.

“Best” is subjective. Image resampling algorithms are a trade-off between accuracy and performance. The fastest approach is likely to be bilinear filtering, as that’s built into the hardware. If you want a more accurate approach, then calculate each fragment colour in the shader as a function of the nearby source texels using an appropriate filter (e.g. bicubic, Lanczos).
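If you'd rather do the resampling on the CPU before uploading, so that every layer is exactly 1024 x 1024, a plain bilinear filter is easy to write. A minimal sketch — the helper name and signature here are mine, not from any library:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// CPU-side bilinear resample of a tightly packed RGBA8 image from
// (srcW x srcH) to (dstW x dstH).
std::vector<uint8_t> ResizeBilinearRGBA(const std::vector<uint8_t>& src,
                                        int srcW, int srcH,
                                        int dstW, int dstH)
{
    std::vector<uint8_t> dst(static_cast<size_t>(dstW) * dstH * 4);
    for (int y = 0; y < dstH; ++y)
    {
        // map the destination pixel centre back into source space
        float fy = std::max((y + 0.5f) * srcH / dstH - 0.5f, 0.0f);
        int   y0 = std::min(static_cast<int>(fy), srcH - 1);
        int   y1 = std::min(y0 + 1, srcH - 1);
        float wy = fy - y0;
        for (int x = 0; x < dstW; ++x)
        {
            float fx = std::max((x + 0.5f) * srcW / dstW - 0.5f, 0.0f);
            int   x0 = std::min(static_cast<int>(fx), srcW - 1);
            int   x1 = std::min(x0 + 1, srcW - 1);
            float wx = fx - x0;
            for (int c = 0; c < 4; ++c)
            {
                // blend the four nearest source texels per channel
                float top = src[(y0 * srcW + x0) * 4 + c] * (1.0f - wx)
                          + src[(y0 * srcW + x1) * 4 + c] * wx;
                float bot = src[(y1 * srcW + x0) * 4 + c] * (1.0f - wx)
                          + src[(y1 * srcW + x1) * 4 + c] * wx;
                dst[(static_cast<size_t>(y) * dstW + x) * 4 + c] =
                    static_cast<uint8_t>(top * (1.0f - wy) + bot * wy + 0.5f);
            }
        }
    }
    return dst;
}
```

You could call this on the SOIL_load_image result whenever w or h differs from the layer size, then hand the returned buffer to glTexSubImage3D with the layer's width/height. The FBO approach above will usually be faster, but this keeps everything on the loading thread with no GL state involved.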

Call it once you’ve uploaded the data for all of the layers.
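Note that with mipmapcount = 1 in the glTexStorage3D call above, only the base level is allocated, so glGenerateMipmap() would have nothing to fill in; immutable storage can't grow later. A complete chain needs 1 + floor(log2(max(width, height))) levels. A small sketch (the helper name is mine):

```cpp
// Number of levels in a complete mip chain for a width x height texture,
// i.e. 1 + floor(log2(max(width, height))).
unsigned int MipLevelCount(unsigned int width, unsigned int height)
{
    unsigned int levels = 1;
    while ((width | height) >> levels)  // loop until both dimensions reach 1
        ++levels;
    return levels;
}

// usage sketch:
// glTexStorage3D(GL_TEXTURE_2D_ARRAY, MipLevelCount(width, height), GL_RGBA8, width, height, layers);
// ... glTexSubImage3D(...) for every layer ...
// glGenerateMipmap(GL_TEXTURE_2D_ARRAY);  // once, after the upload loop, while the array is bound
```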

If you want to generate mipmap levels for specific layers, I believe that it’s possible to do this by creating a texture view for those layers and generating mipmap levels for the view. At least, I can’t find anything in the specification that precludes calling glGenerate[Texture]Mipmap() on a texture view.

how can i bind a certain layer of a GL_TEXTURE_2D_ARRAY to a framebuffer ?
i tried: (i = the layer / separate image)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2Darray, 0);
glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2Darray, 0, i);
glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D_ARRAY, texture2Darray, i, 0);

i get a GL_INVALID_ENUM every time, and checking the framebuffer status gives me GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT
(i tested it by replacing that line with a regular 2D texture, and then it doesn’t give me any error)

texture2Darray definitely exists and is already allocated, texture parameters are already set, only attaching a certain layer fails … ?

is that correct ?
glFramebufferTextureLayer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, texture2Darray, 0, i);

EDIT2: ok, i got it now, thats correct :slight_smile: