Need explanation for glDrawArraysInstanced()

Hello guys,
While experimenting with OpenGL, it seems I don't understand something about glDrawArraysInstanced().

I want to achieve: 1 INT of an array = 1 COLOR = 1 INSTANCE = 2 TRIANGLES = 1 SQUARE

I have a map that fits my screen with integers, each representing a color (4 different ones available):

	int map_size = tile_x * tile_y;
	int *mapmat = new int[map_size];

	for (int i = 0; i < map_size; i++) {
		mapmat[i] = (i % 4);
	}
	mapmat[0] = 3; // 3 is grey
	mapmat[1] = 2; // 2 is blue
	mapmat[2] = 1; // 1 is green
	mapmat[3] = 0; // 0 is red
	mapmat[4] = 3;
	mapmat[5] = 1;

I feed it into a buffer and bind it to a VAO:

	GLuint vbo;
	glGenBuffers(1, &vbo);
	glBindBuffer(GL_ARRAY_BUFFER, vbo);
	glBufferData(GL_ARRAY_BUFFER, sizeof(mapmat), mapmat, GL_STATIC_DRAW);

	GLuint shaderProgram = graphic_engine::compile_program("shader/simple.vs", "shader/simple.fs");

	GLuint matAttrib = glGetAttribLocation(shaderProgram, "mat");
	glEnableVertexAttribArray(matAttrib);
	glVertexAttribIPointer(matAttrib, 1, GL_INT, 0, 0);
	glVertexAttribDivisor(matAttrib, 1); // advance "mat" once per instance

Here are my vertex and fragment shaders:

#version 150 core //VERTEX SHADER
#extension GL_ARB_explicit_attrib_location : require

layout (location = 0) in int mat;
layout (location = 1) uniform vec4 screen; //rx,ry,tile_x,tile_y //some screen settings used only set once ...

out VS_OUT {
	vec4 color;
} vs_out;

// triangles layout
// 2------1
// |    / |
// |   /  |
// |  /   |
// | /    |
// |/     |
// 0------3
const int indexes[6] = int[6](0, 1, 2, 0, 3, 1);

void main() {

	float rx = screen[0];
	float ry = screen[1];

	int tx = int(screen[2]);
	int ty = int(screen[3]);

	float ox = -1.0 + rx*(gl_InstanceID % tx);
	float oy = 1.0 - ry*(gl_InstanceID / tx) ;

	vec4 vertices[4] = vec4[4](
		vec4(ox,      oy - ry, 0.0, 1.0),  // 0: bottom-left
		vec4(ox + rx, oy,      0.0, 1.0),  // 1: top-right
		vec4(ox,      oy,      0.0, 1.0),  // 2: top-left
		vec4(ox + rx, oy - ry, 0.0, 1.0)   // 3: bottom-right
	);

	const vec4 colors[4] = vec4[4](
		vec4(1.0, 0.0, 0.0, 1.0),  // 0: red
		vec4(0.0, 1.0, 0.0, 1.0),  // 1: green
		vec4(0.0, 0.0, 1.0, 1.0),  // 2: blue
		vec4(0.5, 0.5, 0.5, 1.0)   // 3: grey
	);

	gl_Position = vertices[indexes[gl_VertexID]]; // one of the 6 needed vertices
	vs_out.color = colors[mat]; // the color corresponding to the map
}

#version 150 core //FRAGMENT SHADER

in VS_OUT {
	vec4 color;
} vs_out;

uniform vec4 screen; //rx,ry,tile_x,tile_y

out vec4 colorout;

void main() {
	colorout = vs_out.color;
}

I call this each frame:

glDrawArraysInstanced(GL_TRIANGLES, 0,6, tile_x*tile_y);

and here is my output:

What am I missing here?
I don’t really understand when the fragment shader gets its data. It’s supposed to be per vertex, and I have already managed to set one color per vertex, but what if I want the same color for all 6 vertices of a square?

once you call “glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100)”, OpenGL has to execute the equivalent of a “glDrawArrays(GL_TRIANGLES, 0, 6)” command 100 times; you can use the built-in variable “gl_InstanceID” in the vertex shader, which will be 0…99 in this case

once a “glDrawArrays(GL_TRIANGLES, 0, 6)” command is issued, the vertex shader is invoked 6 times
GL_TRIANGLES means that for every 3 vertices, a triangle primitive is sent to the rasterizer; it determines which pixels are covered by the triangle, and for each covered pixel the fragment shader executes

for example, if you have 1 triangle that covers half of a 1024 x 1024 window, the vertex shader will only be invoked 3 times, but the fragment shader will be invoked 1024 x 1024 / 2 = 524,288 times

you can pass data from the vertex shader to the fragment shader, like a “vec4 color” value; that value gets interpolated within the triangle (unless you set another interpolation qualifier explicitly, like “flat”). “integer type” variables (like int or uint) can’t be interpolated, so you have to either pass them with the “flat” qualifier (which means only the provoking vertex’s value, by default the last vertex of the triangle, will be used by ALL pixels of that triangle) or use another way to access that information in the fragment shader
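as a sketch, the “flat” route would look like this (the name “mat_flat” is made up for illustration):

```glsl
// vertex shader
flat out int mat_flat;
void main() {
	// ...
	mat_flat = mat; // same value at every vertex of the instance anyway
}

// fragment shader
flat in int mat_flat; // not interpolated: one value for the whole triangle
```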

by the way: did you check for GL errors and for shader compilation / linking errors?

Thanks for the answer,

Yeah, I am checking both compile and linking errors, and none occur.
I also use GLIntercept, a Windows tool that gives more info about OpenGL errors, and its logs show nothing.

I am aware of the provoking vertex and have already tried the “flat” qualifier; it doesn’t change anything.
Does it require an extension to work properly in GLSL 150 core?

The most frustrating part is that passing gl_InstanceID to the fragment shader as a flat int works:

colorout = colors[gl_InstanceID % 4];

(it produces a nice 4 colored checker board)

but passing the int “mat” instead produces the same strange output.
That means “mat” isn’t being passed from the vertex shader to the fragment shader properly for some reason.

gl_InstanceID exists in GLSL 1.50 core profile.

One obvious issue with your code is that the buffer size is wrong:

[var]sizeof(mapmat)[/var] will be the size of a pointer, not the size of the array it points to. Use [var]map_size*sizeof(int)[/var], or use a std::vector rather than a bare array.

The obvious ones are always the most frustrating; you are my savior.

I experimented a bit more with this size/stride/offset mess and now understand the whole thing better. Thank you!