Texture coordinates produce unexpected results

Hello. I'm rendering a cube. The texture is a solid blue color.

In the fragment shader, I added the following condition as a sanity check on the texture coordinates.
But instead of the expected pattern (see the attached screenshot), I get something strange.
I also checked the UV-coordinate input, and it seems to be correct: every 3 UV pairs (x, y) form a triangle.

There are 36x2 UV pairs, which seems correct. The model was generated in Blender.
The error looks like a memory or buffer-layout issue.

#version 430 core
in vec2 TexCoords; 
layout (binding=0) uniform sampler2DArray textureArray;
//layout (location=1) uniform int layer;
out vec4 color;
void main(void)  
{
    color = vec4(texture2DArray(textureArray, vec3(TexCoords.xy, 0.0f)));

    if((TexCoords.x >= 0.1f) && (TexCoords.x <= 0.9f))
    {
        if((TexCoords.y >= 0.1f) && (TexCoords.y <= 0.9f))
        {
            color = vec4(1.0f,1.0f,1.0f,1.0f);
        }        
    }   
}

The UV-coordinates are loaded this way:

// texture_coordinate
	glGenBuffers(1, &obj.glbuffers[UV_BUFFER]);
	glBindBuffer(GL_ARRAY_BUFFER, obj.glbuffers[UV_BUFFER]);

	glBufferData(GL_ARRAY_BUFFER,
		sizeof(GLfloat)*obj.uv_coordinates.size(),
		&obj.uv_coordinates[0],
		GL_STATIC_DRAW);
	glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, 0, NULL);
	glEnableVertexAttribArray(5);

[Attached screenshot: the rendered cube with the garbled texture pattern]

Bump. I checked the UV-coordinate array again: the UV coordinates are correct, and they do form the triangles. But the rendered cube still has messed-up textures.

From the picture, it’s clear that the texture coordinates don’t match the positions; diagonally-opposed vertices should have diagonally-opposed texture coordinates (e.g. (0,0) and (1,1) or (1,0) and (0,1)) but in some cases they are sharing an edge (i.e. both have the same s coordinate or the same t coordinate).

Incidentally:

The texture2DArray() function doesn’t exist in standard GLSL, only in the EXT_texture_array extension. 2D array textures were introduced in GLSL 1.3, which also deprecated the type-specific texture functions in favour of overloads, so a type-specific function for 2D array textures was never added to GLSL.

Either use #extension GL_EXT_texture_array to enable the extension, or (preferably) use texture() instead.

Thank you for your answer. I have replaced texture2DArray with texture. Interestingly, texture2DArray worked without enabling any extension.

About the textures: I have drawn the cube on paper, with all vertices and UV coordinates marked. The vertex index vector [36x1 GLushort],
the vertex position vector [8x3 GLfloat], and the UV-coordinate vector [36x2 GLfloat] are correct.

The cube is rendered with the call glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);

The vertex shader is as follows:

#version 430 core

layout(location = 0) in vec4 position;
layout(location = 5) in vec2 texCoord;

uniform mat4 mv_matrix; 
uniform mat4 proj_matrix; 

out vec2 TexCoords;

void main(void)   
{
   gl_Position = proj_matrix * mv_matrix * position;
   TexCoords = texCoord;
}

The buffer and attribute setup is as follows:

// texture_coordinate
	glGenBuffers(1, &obj.glbuffers[UV_BUFFER]);
	glBindBuffer(GL_ARRAY_BUFFER, obj.glbuffers[UV_BUFFER]);

	glBufferData(GL_ARRAY_BUFFER,
		sizeof(GLfloat)*obj.uv_coordinates.size(),
		&obj.uv_coordinates[0],
		GL_STATIC_DRAW);
	glVertexAttribPointer(5, 2, GL_FLOAT, GL_FALSE, 0, NULL);
	glEnableVertexAttribArray(5);

	// vertex_position
	glGenBuffers(1, &obj.glbuffers[POSITION_BUFFER]);
	glBindBuffer(GL_ARRAY_BUFFER, obj.glbuffers[POSITION_BUFFER]);
	glBufferData(GL_ARRAY_BUFFER,
		sizeof(GLfloat)*obj.geometry_library_v.size(),
		&obj.geometry_library_v[0],
		GL_STATIC_DRAW);
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);
	glEnableVertexAttribArray(0);

	// index_buffer
	glGenBuffers(1, &obj.glbuffers[INDEX_BUFFER]);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, obj.glbuffers[INDEX_BUFFER]);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER,
		sizeof(GLushort)*obj.vertex_indexes.size(),
		&obj.vertex_indexes[0],
		GL_STATIC_DRAW);

Clearly they aren’t, as you have 8 sets of spatial coordinates but 36 sets of texture coordinates.

The indices in the element array are used to index into all of the attribute arrays simultaneously. If the indices range from 0 to 7, then you’re only using the first 8 sets of texture coordinates. And if the indices exceed 7, then you’re indexing beyond the end of the position array.

If you’re planning on being able to map any face to any part of the texture, then you’ll need 24 (6x4) distinct vertices, i.e. 24x3 floats in the position array and 24x2 floats in the texture array. That number can be reduced if you have some shared edges (where vertices with the same position have the same texture coordinates), but you’ll still need at least 14 distinct vertices (you can’t have more than 5 shared edges for a sane texture mapping).
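
As a minimal sketch of that 24-vertex layout (the values below are illustrative, not taken from your model; only the front face is written out in full):

// Front (+Z) face only; the other five faces repeat the pattern with
// their own 4 vertices, so each face can carry its own UV mapping.
static const GLfloat positions[] = {
	-1.0f, -1.0f,  1.0f,   // vertex 0: lower-left
	 1.0f, -1.0f,  1.0f,   // vertex 1: lower-right
	 1.0f,  1.0f,  1.0f,   // vertex 2: upper-right
	-1.0f,  1.0f,  1.0f,   // vertex 3: upper-left
	// ... 5 more faces, 4 vertices each => 24x3 floats in total
};
static const GLfloat texcoords[] = {
	0.0f, 0.0f,            // vertex 0
	1.0f, 0.0f,            // vertex 1
	1.0f, 1.0f,            // vertex 2
	0.0f, 1.0f,            // vertex 3
	// ... one UV pair per vertex => 24x2 floats in total
};
// Two triangles per face, indexing that face's 4 vertices:
static const GLushort indices[] = {
	0, 1, 2,   0, 2, 3,
	// ... 6 faces => 36 indices in total
};

With this layout the position and texture-coordinate arrays have the same number of vertices, so a single index addresses both consistently.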

Thank you for your answer.
What is the industry practice for texture mapping? It seems that a plain glDrawArrays setup is much easier than the glDrawElements setup.
For example, if the object geometry were much more complex than a cube, how do people write it to buffers and texture-map it?

[QUOTE=wenhaug;1291335]
What is the industry practice for texture mapping? It seems that a plain glDrawArrays setup is much easier than the glDrawElements setup.
For example, if the object geometry were much more complex than a cube, how do people write it to buffers and texture-map it?[/QUOTE]
glDrawElements() allows vertices to be shared, which you normally want to do unless you’re generating disjoint points/lines/triangles. For “smooth” meshes (which are meant to approximate a mostly-continuous surface, rather than geometric solids), most vertices will share all of their attributes; only vertices on sharp edges or texture seams will have different attributes on different sides of the edge. As the mesh resolution increases, the total number of vertices increases quadratically while the number of “split” vertices increases only linearly. “Models” loaded from files invariably use glDrawElements() (or similar). glDrawArrays() is mainly for generated data with a linear topology (point clouds, line strips/loops, triangle/quad strips used to render a line of text, etc).
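
As a minimal illustration (hypothetical data, not from this thread): a quad drawn as two triangles has two shared vertices, which indexed drawing stores only once:

// 4 unique vertices instead of the 6 that glDrawArrays(GL_TRIANGLES, ...)
// would need, because vertices 0 and 2 are shared by both triangles.
static const GLfloat quad_positions[] = {
	-1.0f, -1.0f, 0.0f,   // 0
	 1.0f, -1.0f, 0.0f,   // 1
	 1.0f,  1.0f, 0.0f,   // 2
	-1.0f,  1.0f, 0.0f,   // 3
};
static const GLushort quad_indices[] = { 0, 1, 2,   0, 2, 3 };
// glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);

On a large smooth mesh the same sharing applies to almost every vertex, which is where the savings come from.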

If you’re loading something like OBJ, where each vertex has separate indices for position, normal and texture coordinates, you need to maintain a mapping between OBJ vertices and OpenGL vertices (where a single index is used for all attributes). If several OBJ vertices have the same tuple of indices, they’d all be mapped to the same OpenGL vertex. Typically you’d use an associative array (aka dictionary, keyed map, hash table) to map OBJ index tuples to OpenGL indices, and a vector to hold the reverse mapping. The forward mapping is used to convert the face definitions to an element array, the reverse mapping is used to convert the OBJ position/normal/texture coordinate arrays (which will have different sizes) to OpenGL attribute arrays (which must all have the same number of elements).
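
A minimal sketch of that remapping, assuming each OBJ face vertex has already been parsed into a 0-based (position, texcoord, normal) index triple (all names below are hypothetical):

#include <array>
#include <cstdint>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };
struct GlVertex { Vec3 position; Vec2 texcoord; Vec3 normal; };

// Collapse OBJ index triples into single OpenGL indices.
void build_gl_buffers(const std::vector<std::array<int,3>>& obj_face_verts, // (v, vt, vn) per face vertex, 0-based
                      const std::vector<Vec3>& obj_positions,
                      const std::vector<Vec2>& obj_texcoords,
                      const std::vector<Vec3>& obj_normals,
                      std::vector<GlVertex>&   gl_vertices,   // reverse mapping: OpenGL index -> attributes
                      std::vector<uint32_t>&   gl_indices)    // element array for glDrawElements
{
	// Forward mapping: OBJ index triple -> OpenGL index.
	std::map<std::array<int,3>, uint32_t> remap;

	for (const auto& triple : obj_face_verts) {
		auto it = remap.find(triple);
		if (it == remap.end()) {
			// First occurrence of this attribute combination:
			// emit a new OpenGL vertex and remember its index.
			const uint32_t new_index = static_cast<uint32_t>(gl_vertices.size());
			gl_vertices.push_back({ obj_positions[triple[0]],
			                        obj_texcoords[triple[1]],
			                        obj_normals[triple[2]] });
			it = remap.emplace(triple, new_index).first;
		}
		gl_indices.push_back(it->second);
	}
	// gl_vertices now holds equal-length attribute data per vertex;
	// gl_indices can be uploaded to GL_ELEMENT_ARRAY_BUFFER as-is.
}

std::map keeps the sketch simple; an std::unordered_map with a custom hash for the triple is the usual choice when load-time performance matters.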

Most closed surfaces need to have texture seams to get a sane mapping. Genus-0 objects (blob-like, with no loops) can avoid this if the texture mapping is symmetrical (or they can use cube maps), but even that tends to result in large variations in texture scaling.