Error 1281 with weird rendering results.

So, I have created a batch renderer that should be able to handle individually textured sprites. Now, for some reason, I get a 1281 error within my fragment shader, and I’m uncertain why. I’ll start by explaining the strange rendering behaviour.

Effectively, whichever sprite is added to a layer first, that sprite’s texture is applied to all sprites, even though the sprites have the correct textureIDs for the texture array that’s used by the shader. This makes me believe the shader is doing something funny.

Say I have this scenario:

void main()
{
	Texture* ta = new Texture("test.png");
	Texture* tb = new Texture("test1.png");
	Texture* tc = new Texture("test2.png");
	TileLayer layer(&shader);

	Sprite* sp1 = new Sprite(2, 4, 0.9f, 0.9f, ta);
	Sprite* sp2 = new Sprite(8, 4, 0.9f, 0.9f, tb);
	Sprite* sp3 = new Sprite(2, 6, 0.9f, 0.9f, tc);
	Sprite* sp4 = new Sprite(5, 4, 0.9f, 0.9f, vec4(1, 0, 0, 1));
	layer.add(sp1);
	layer.add(sp2);
	layer.add(sp3);
	layer.add(sp4);
}

Quite a bit of my main test class is obviously cut out, but you can see that three of the sprites have unique textures, whereas one is deliberately rendered untextured (with a flat colour) to make sure that path worked.

The fragment shader looks like this:

#version 330 core

layout (location = 0) out vec4 color;

uniform vec4 col;
uniform vec2 light_pos;

in DATA
{
	vec4 position;
	vec2 uv;
	float tid;
	vec4 color;
} fs_in;

uniform sampler2D textures[32];

void main()
{
	float intensity = 1.0f / length(fs_in.position.xy - light_pos);
	vec4 texColor = fs_in.color;
	if (fs_in.tid > 0.0)
	{
		int tid = int(fs_in.tid + 0.5);
		texColor = texture(textures[tid], fs_in.uv);
		//texColor = vec4(tid, 0, 0, 1);
	}

	color = texColor * intensity;
}

Now, from what I can tell, the 1281 error (and, I imagine, the reason all sprites are drawn with one texture) comes from the line texColor = texture(textures[tid], fs_in.uv);. I’ve checked both the tid and the UV values to make sure they aren’t out of range, and both seem fine. The textureIDs are well within the limit of the 32 samplers, since there are only 3 possible textures and no texture is given a TID of 0. The UVs must be set correctly, because the sprites are drawn to the screen.

Effectively, what I’m looking for is help from someone knowledgeable as to why that specific line is throwing the error, as well as confirmation that I’m right about why all sprites end up with one texture. For the life of me I can’t figure out how there is an invalid value in that line.

If any more code is required to properly assess the issue, feel free to ask!

Don’t use float for indexing an array.
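A flat integer varying avoids the float-to-int round trip entirely. A minimal sketch of the fragment-shader side, assuming the vertex attribute is changed to an int and the vertex shader declares a matching flat out int (the names here are placeholders, not taken from your code):

#version 330 core

layout (location = 0) out vec4 color;

// 'flat' is required for integer varyings in GLSL 3.30 and means the
// index is not interpolated across the triangle.
flat in int v_tid;
in vec2 v_uv;

uniform sampler2D textures[32];

void main()
{
	color = texture(textures[v_tid], v_uv);
}

Also note that in GLSL 3.30 an array of samplers may only be indexed by a constant integral expression; GLSL 4.00 relaxes that to dynamically uniform expressions, but a per-sprite index is neither. Many drivers accept it anyway, which makes it a common source of texturing that works on one GPU and misbehaves on another.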

You can’t get an OpenGL error “within” a shader.

Only API calls generate errors. So you need to find out which call is generating it. That requires calling glGetError() immediately before and immediately after the call in question, and confirming that it returns 0 (GL_NO_ERROR) before the call and GL_INVALID_VALUE (= 0x0501 = 1281) after it.
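In code, the bracketing looks something like this (a minimal sketch; the loader header and the example draw call are stand-ins for whatever you’re actually using):

#include <GL/glew.h> // or whichever loader/header the project already uses
#include <cstdio>

// glGetError() returns the oldest recorded error and clears it, so first
// drain anything left over from earlier calls.
static void clearGLErrors()
{
	while (glGetError() != GL_NO_ERROR)
		;
}

// Print every error recorded since clearGLErrors(), tagged with a label
// identifying the call under suspicion.
static void checkGLErrors(const char* label)
{
	GLenum err;
	while ((err = glGetError()) != GL_NO_ERROR)
		printf("GL error 0x%04X after %s\n", err, label);
}

// Usage around a suspect call:
//   clearGLErrors();
//   glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, nullptr);
//   checkGLErrors("glDrawElements");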

Errors with shader code are reported when compiling or linking them. You need to use glGetShaderiv(GL_COMPILE_STATUS) and glGetProgramiv(GL_LINK_STATUS) to detect such errors. glUseProgram() will generate a GL_INVALID_OPERATION error if the argument isn’t a program object (i.e. if the program didn’t compile successfully). Draw calls may generate errors if some aspect of the current state is incompatible with the program.
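The status checks themselves look roughly like this (a sketch, assuming shader and program are the IDs returned by glCreateShader() and glCreateProgram()):

GLint status = GL_FALSE;

// After glCompileShader(shader):
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE)
{
	char log[1024];
	glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
	printf("shader compile failed:\n%s\n", log);
}

// After glLinkProgram(program):
glGetProgramiv(program, GL_LINK_STATUS, &status);
if (status != GL_TRUE)
{
	char log[1024];
	glGetProgramInfoLog(program, sizeof(log), nullptr, log);
	printf("program link failed:\n%s\n", log);
}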

[QUOTE=GClements;1284664]You can’t get an OpenGL error “within” a shader.

Only API calls generate errors. …[/QUOTE]

Hm, I see… I’ll have to go through the API calls I added most recently; prior to making these changes there wasn’t a 1281 error. Thanks for the input!

So, I did as suggested and I’m now even more perplexed than before. I found an older version of my project in which the batch renderer doesn’t produce a 1281 error. I tried putting that batch renderer in place of my latest one and still got a 1281 error… (I had it set up so that it wasn’t using any of the newer code from my latest batch renderer.) This has obviously made me more confused. None of the newest API calls cause the error (my init() function has the most calls and they are all fine), but adding glGetError() to my begin, end or flush functions returns an error. The only thing that precedes these three functions is init()…? :confused:
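For reference, this is roughly how I’m wrapping calls while narrowing it down (GL_CHECK is just a hypothetical helper along the lines suggested above, not something from the engine):

#define GL_CHECK(call)                                          \
	do {                                                        \
		while (glGetError() != GL_NO_ERROR)                     \
			; /* drain stale errors first */                    \
		call;                                                   \
		GLenum e = glGetError();                                \
		if (e != GL_NO_ERROR)                                   \
			printf("GL error 0x%04X at %s:%d: %s\n",            \
			       e, __FILE__, __LINE__, #call);               \
	} while (0)

// e.g. inside flush() (the member names here are made up):
//   GL_CHECK(glBindVertexArray(m_vao));
//   GL_CHECK(glDrawElements(GL_TRIANGLES, m_indexCount, GL_UNSIGNED_INT, nullptr));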