Symbols are being ignored during linkage

I’ve had to start my code from scratch because too many changes left me unable to diagnose the source of a crash. Now that I’ve finished the basics (taking in lessons from the previous attempt and the Cherno’s videos), I find that symbols I’m clearly using in the shaders are not being treated as “active” by the linker and are thrown away. Could someone have a look through my code to see if they can spot the cause? I’m struggling to understand how they’re being judged “inactive”. The first function to look through is “GfxHeartSetupProgram” in engine/heart.c, since that is where I declare the shaders. Also, if you notice anything that could use some tweaking for the sake of supporting other graphics APIs, please let me know, since I plan to try out Vulkan, DirectX and Metal at a later time.

The only symbol which is actually used is Place, as that’s used to determine gl_Position. Color and Texel are used to assign Pigment and Texture, but the fragment shader doesn’t actually use those so they are discarded and thus so are Color and Texel.

Even when uncommenting the line that uses them, it still tells me there are 0 active symbols. Going by what you said, there should be 3 active symbols by that point, and 1 in the situation you referred to, but again it said 0 active symbols. Any other ideas?

Edit: Is there any way to tell the compiler/linker to ignore whether the symbols are perceived as “active” and just declare them active anyway? TBH I think that was an incredibly stupid thing to include in the first place; if a developer is being stupid, it is on them to remove unused symbols, not the compiler/linker.

You’ll need to elaborate upon that. What is telling you what? Is compilation succeeding? Is linking succeeding? Are you checking for active uniforms or active attributes? Are you checking the correct program?

No. In any case, it’s a non-issue. If a shader doesn’t contain a uniform or attribute, don’t try to assign data to it.

As you’re using 4.3, you can just add explicit layout(location=...) qualifiers to the attributes and bind by location rather than by name.
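
For example (illustrative names), a 4.3 attribute can carry its own location so the application never has to look it up by name:

```glsl
#version 430

// bound by number: the app can feed attribute 0 directly,
// no glGetAttribLocation() lookup by name required
layout(location = 0) in vec3 Position;
```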

But you still need to figure out why the shader program doesn’t contain attributes you expect it to contain.

This falls down for non-trivial cases, where the program is generated from multiple shaders (you can call glAttachShader multiple times for a single stage), each of which is generated from multiple sources (there’s a reason glShaderSource takes an array of strings rather than a single string) using conditional compilation (#if/#else/#endif). The programmer would need to effectively re-implement a large chunk of the compiler to determine which variables were actually used, even though the compiler already knows that.

I’ll reply to the first bit of your post before I continue reading:

dint FindVertexGpuSymbol
(
	GFX *Gfx,
	GFX_HEADER *Header,
	GFX_LAYOUT *Layout,
	GFX_SYMBOL *Symbol
)
{
	Symbol->Loc = glGetAttribLocation( Header->GfxId, Symbol->Name );

	if ( Symbol->Loc == (uint)-1 )
	{
		dint num = 0;
		FILE *out = Gfx->Blocks->Alloc->out;
		ECHO_ERROR( out, -1 );
		glGetProgramiv( Header->GfxId, GL_ACTIVE_ATTRIBUTES, &num );
		fprintf
		(
			out,
			"Unable to find vertex symbol '%s' from %d active symbols\n",
			Symbol->Name, num
		);
		return -1;
	}

	return 0;
}

Edit: I can’t use layout because I want to support editing shaders during runtime and re-compiling them. Needing to know the symbol locations beforehand means I would have to parse the code myself, which just slows down the process of finding them.

As for your second point, I thought the array of strings was just so that one of each shader you need could be compiled all at once before linking them. I had planned on generating a new program for each set of shaders and switching between them on the fly; I’ve even designed my new code so shaders can be shared between programs, reducing the need for compiling.

As for your point about the link and build status, this is where I check them:

dint MakeGfxHeader( GFX *Gfx, void *Chunk )

{
	GFX_HEADER *Header = Chunk;
	BLOCK *ShaderIDs = GrabBlock( Gfx->Blocks, Header->GfxShaderIDs );
	dint i, *shaderIDs = ShaderIDs->addr;
	uint id = Header->GfxId = glCreateProgram();
	Header->FreeId = true;

	for ( i = 0; i < ShaderIDs->used; ++i )
	{
		GFX_SHADER *Shader = GraspGfxShader( Gfx, shaderIDs[i] );
		glAttachShader( id, Shader->GfxId );
	}

	glLinkProgram( id );
	glGetProgramiv( id, GL_LINK_STATUS, &(Header->Built) );
	glValidateProgram( id );
	glGetProgramiv( id, GL_VALIDATE_STATUS, &(Header->Valid) );

	if ( !(Header->Built) || !(Header->Valid) )
	{
		ECHO_ERROR( Gfx->Blocks->Alloc->out, -1 );
		DebugGfxHeader( Gfx, Header );
		return -1;
	}

	return 0;
}

Found a problem with the shaders being incorrectly passed over to the program:

...
./a.out --engine-test
[ARGV] Checking argument './a.out'...
[ARGV] Checking argument '--engine-test'...
[GFX_MAIN_THREAD] Checking if should test image code...
Found Symbol 'Position' with location 0
Found Symbol 'PosColor' with location 1
Found Symbol 'PosTexel' with location 2
[GFX_MAIN_THREAD] Checking if should test engine code...
Checking shader 1 'assets/painter.glsl'
Checking shader 2 ''
engine/gfx_opengl_symbol.c:205: Error 0xFFFFFFFF (-1) 'Unknown error -1'
Unable to find vertex symbol 'Position' from 0 active symbols
...
Compilation failed.

Unfortunately I’m not seeing why it’s picking up the wrong shaders. The 1st shader path should’ve read “assets/builder.glsl”, and the 2nd should’ve read what the 1st reads. The weird part is that they’re both added the exact same way, via a loop:

dint GfxHeartSetupProgram( GFX_HEART *Heart )
{
	GFX *Gfx = &(Heart->Gfx);
	ALLOC *Alloc = Gfx->Blocks->Alloc;
	FILE *out = Alloc->out;
	dint i, err, header = AllocGfxHeader( Gfx );
	GFX_HEADER *Header = GraspGfxHeader( Gfx, header );
	struct
	{
		uint Type;
		uch *Path;
	} shaders[] =
	{
		{ GFXB_POINT_SHADER, u8"assets/builder.glsl" },
		{ GFXB_COLOR_SHADER, u8"assets/painter.glsl" },
		{ 0, NULL }
	};

	for ( i = 0; shaders[i].Path; ++i )
	{
		dint j = AllocGfxShader( Gfx );
		GFX_SHADER *Shader = GraspGfxShader( Gfx, i );

		err = MakeGfxShader( Gfx, Shader, shaders[i].Type, shaders[i].Path );

		if ( err )
		{
			ECHO_ERROR( out, err );
			return err;
		}

		BindGfxShaderId( Gfx, Header, j );
	}

	return MakeGfxHeader( Gfx, Header );
}

BindGfxShaderId is all I can think of that might be causing the problem but when I look at it I see no potential faults:

dint BindGfxShaderId( GFX *Gfx, void *Chunk, dint ShaderId )
{
	GFX_HEADER *Header = Chunk;
	BLOCKS *Blocks = Gfx->Blocks;
	BLOCK *ShaderIDs = GrabBlock( Blocks, Header->ShaderIDs );
	dint i = WantChunk( ShaderIDs, sizeof(dint), 1 ), *I;

	if ( i < 0 )
	{
		ALLOC *Alloc = Blocks->Alloc;
		ECHO_ERROR( Alloc->out, Alloc->err );
		return Alloc->err;
	}

	I = GrabChunk( ShaderIDs, sizeof(dint), i );
	*I = ShaderId;
	return 0;
}

Any ideas?

Edit: Btw those “found” symbols from earlier in the post are the result of this addition to the shader construction code which searches the GLSL for relevant symbols:

	Locations->used = 0;
	
	for ( tok = strstr( code, "INDEX(" ); tok; tok = strstr( tok, "INDEX(" ) )
	{
		dint j = WantChunk( Locations, sizeof(GFX_SYMLOC), 1 ), k;
		GFX_SYMLOC *Loc = GrabChunk( Locations, sizeof(GFX_SYMLOC), j );
		char *end = strchr( tok, ';' );

		if ( !end )
		{
			err = EILSEQ;
			ECHO_ERROR( out, err );
			fputs( "';' Not found\n", out );
			return err;
		}

		*end = 0;

		if ( strstr( tok, "layout" ) )
		{
			Locations->used--;
			*end = ';';
			++tok;
			continue;
		}

		tok += 6;
		{
			int used = 0;
			/* %n reports characters consumed; sscanf's return value is the
			 * conversion count, which would only skip single-digit locations */
			sscanf( tok, "%u)%n", &(Loc->loc), &used );
			tok += used;
		}

		if ( strstr( tok, " in " ) )
		{
			Loc->io = "in";
			tok += 4;
		}
		else if ( strstr( tok, " out " ) )
		{
			Loc->io = "out";
			tok += 5;
		}

		for ( k = 0; k < GFXT_COUNT; ++k )
		{
			GFXT *T = Gfx->Types + k;

			if ( !(T->type) )
				continue;

			if ( strstr( tok, T->type ) )
			{
				Loc->type = T->type;
				tok += strlen( T->type );
				break;
			}
		}

		Loc->name = ++tok;
		fprintf
		(
			out,
			"Found Symbol '%s' with location %u\n",
			Loc->name, Loc->loc
		);
		*end = ';';
	}

Edit 2: Found the source of the incorrect shader issue: it turned out I had accidentally used the wrong index variable when grabbing the pointer from the index. Fixed that, but I still have “0 active symbols”.

… that makes no sense.

If your code uses glGetUniformLocation(prog, "uniform_name"), then you must already expect the program to have a uniform named “uniform_name”, yes? And you expect “uniform_name” to have certain specific properties (its type) which allows you to send it uniform data, yes? That name is hard-coded into your application, as are the expectations of that name’s uniform properties.

So why can you not simply use a number instead of a string? That’s all we’re talking about here: using a number, not a string.

To begin with, there are currently no uniforms in use, so I’m not expecting those symbols to appear yet; it’s the static symbols linked to buffers that are currently giving me problems. Secondly, as I mentioned before, I plan to enable editing shaders DURING runtime and hot-swap the originals with the new code, and doing so without parsing the code means I have to forgo explicit locations in favour of automatic ones. The reason I must be able to edit them during runtime is efficiency of testing: I’m going to be loading in models and scenes from Dragon Quest Builders 2 while I’m developing, which means I will most likely have to edit the shaders many times. I do not fancy shutting down the engine every 10-15 seconds until the shaders support what is expected of them by DQB2’s assets.

Okay, something is definitely off. I tried putting fixed positions and a color in the shaders to see if they’re actually outputting anything, and I just get a black screen:

#version 430
#define INDEX(POS) layout(location=POS)

INDEX(0) in vec3 Position;
INDEX(1) in vec4 PosColor;
INDEX(2) in vec2 PosTexel;
/*
in vec3 Normal;
in vec3 Tangent;
in vec3 BiNormal;

struct CHANGE
{
	vec4 Tint;
	vec3 Spot;
	vec3 Span;
};

uniform CHANGE Strip;
uniform CHANGE Model;
uniform CHANGE Stage;
*/

out vec4 UseColor;
out vec2 UseTexel;

void main()
{
	uint id = gl_InstanceID;
	uint x = 1, y = 0;

	if ( id == 0 )
	{
		x = 0;
		y = 1;
	}

	gl_Position = vec4( Position, 1.0 );
	gl_Position = vec4( x * 1.0, y * 1.0, 0.0, 1.0 );
	UseColor = PosColor;
	UseTexel = PosTexel;
}

#version 430

in vec4 UseColor;
in vec2 UseTexel;
out vec4 FragColor;

uniform sampler2D Cover2D;

void main()
{
	FragColor = UseColor + texture( Cover2D, UseTexel );
	FragColor = vec4( 1.0, 1.0, 1.0, 1.0 );
}

I did try gl_FragColor in the second one, but the compiler complains it’s undefined. Any ideas?

Edit: Just tried re-installing my system since I suspected I had a corrupt driver (some games weren’t displaying like they used to); it didn’t resolve the “active symbols” issue or the lack of a triangle on screen.

Unless you’re rendering point primitives, it should give you a black screen (or whatever your clear color is).

You use the same position for every vertex in the same instance. gl_InstanceID only changes between instances, and it will only ever be non-zero if you use an instanced rendering call. If the positions for a triangle or a line are all the same, then no fragments will be generated for them.

Maybe you meant gl_VertexID rather than gl_InstanceID.

Even after switching to gl_VertexID I still get no triangle, any other ideas?

Edit: It just occurred to me that I only gave them 2 positions, whereas x should’ve had 3. I rectified that, but still no triangle :expressionless:

#version 430
#define INDEX(POS) layout(location=POS)

INDEX(0) in vec3 Position;
INDEX(1) in vec4 PosColor;
INDEX(2) in vec2 PosTexel;
/*
in vec3 Normal;
in vec3 Tangent;
in vec3 BiNormal;

struct CHANGE
{
	vec4 Tint;
	vec3 Spot;
	vec3 Span;
};

uniform CHANGE Strip;
uniform CHANGE Model;
uniform CHANGE Stage;
*/

out vec4 UseColor;
out vec2 UseTexel;

void main()
{
	uint id = gl_VertexID;
	float x = 1, y = 0;

	if ( id == 0 )
	{
		x = 0;
		y = 1;
	}

	if ( id > 1 )
	{
		x = -x;
	}

	gl_Position = vec4( Position, 1.0 );
	gl_Position = vec4( x, y, 0.0, 1.0 );
	UseColor = PosColor;
	UseTexel = PosTexel;
}

I’ve tried everything I can think of. I made an example project to compare to, and I put both shaders into one file. I can’t find any reason for it failing; it’s as though the GPU is remembering the shader and, instead of actually checking for changes, just declares it the same as before :|.

#shader COLOR "Fragment"
#version 430

in vec4 UseColor;
in vec2 UseTexel;
out vec4 FragColor;

uniform sampler2D Cover2D;

void main()
{
	FragColor = UseColor + texture( Cover2D, UseTexel );
	//FragColor = vec4( 1.0, 1.0, 1.0, 1.0 );
}

#shader BASIC "Vertex"
#version 430
#define INDEX(POS) layout(location=POS)
#define FIXED(POS) layout(binding=POS)

struct CHANGE
{
	vec4 Tint;
	vec3 Spot;
	vec3 Span;
};

INDEX(0) in vec3 Position;
INDEX(1) in vec4 PosColor;
INDEX(2) in vec2 PosTexel;
INDEX(3) in vec3 Normal;
INDEX(4) in vec3 Tangent;
INDEX(5) in vec3 BiNormal;

uniform CHANGE Strip;
uniform CHANGE Model;
uniform CHANGE Stage;

out vec4 UseColor;
out vec2 UseTexel;

void main()
{
	uint id = gl_VertexID;
	float x = 1, y = 0;

	if ( id == 0 )
	{
		x = 0;
		y = 1;
	}

	if ( id > 1 )
	{
		x = -x;
	}

	gl_Position = vec4( Position, 1.0 );
	//gl_Position = vec4( x, y, 0.0, 1.0 );
	UseColor = PosColor;
	UseTexel = PosTexel;
}

Then stop looking at the code and start looking at the data. Run the program under a debugger, set breakpoints, look at what is being passed to OpenGL functions and what they’re returning.

In all probability, the issue has nothing to do with OpenGL and everything to do with the code which is calling it.

Already checked that too; I printed the whole code being passed to it. There is this one warning during compilation of the C code, though:

engine/opengl/shader.c:197:31: warning: initialization of ‘char * const*’ from incompatible pointer type ‘const char **’ [-Wincompatible-pointer-types]

Which is this line:

char * const *codes = &(Shader->Code);

Can’t imagine that’s an issue for OpenGL. This is the block that uses it later in that same function:

	Shader->GfxId = id = glCreateShader( Shader->Type );

	/* glCreateShader() returns 0 on failure, not -1 */
	if ( id == 0 )
	{
		err = -1;
		ECHO_ERROR( out, err );
		return err;
	}

	Shader->FreeId = true;
	glShaderSource( id, 1, codes, NULL );
	glCompileShader( id );
	glGetShaderiv( id, GL_COMPILE_STATUS, &(Shader->Built) );

Edit: By the way I’m going to bed so don’t expect any more responses from me tonight

Figured out why it was reporting 0 active symbols: that was a slip-up on my part where I printed the count prior to querying it. However, that didn’t resolve the fact that the symbols go unlocated. Also, what should I use instead of gl_FragColor? Every time I set #version 430 the compiler complains it is undefined. Anyway, here’s the current state of the shaders:

#shader BASIC "Vertex"
#version 330
#if 1
#define INDEX(POS) layout(location=POS)
#define FIXED(POS) layout(binding=POS)
#else
#define INDEX(POS)
#define FIXED(POS)
#endif

out vec2 DyeTexel;
out vec4 DyeColor;

struct CHANGE { vec4 Color; vec3 Point, Width; };

INDEX(0) in vec2 PosTexel;
INDEX(2) in vec4 PosColor;
INDEX(6) in vec3 PosPoint;
INDEX(9) in vec3 Normal;
INDEX(12) in vec3 Tangent;
INDEX(15) in vec3 BiNormal;

/* Joint, Model, Stage */

void main()
{
	int id = gl_VertexID;
	float x = 1.0, y = 0.0;

	if ( id == 0 )
	{
		x = 0.0;
		y = 1.0;
	}

	if ( id > 1 )
	{
		x = -x;
	}

	gl_Position = vec4( PosPoint, 1.0 );
	//gl_Position = vec4( x, y, 0.0, 1.0 );
	DyeColor = PosColor;
	DyeTexel = PosTexel;
}

#shader COLOR "Fragment"
#version 330

in vec2 DyeTexel;
in vec4 DyeColor;

//uniform sampler2D Texture;

void main()
{
	//gl_FragColor = DyeColor + texture( Texture, DyeTexel );
	gl_FragColor = vec4( 1.0, 1.0, 1.0, 1.0 );
}
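
On the gl_FragColor question: in core-profile GLSL 430 gl_FragColor no longer exists, so the usual replacement is a user-declared out variable, as the earlier FragColor shaders already did. A minimal sketch (names illustrative):

```glsl
#version 430

in vec4 DyeColor;

// user-declared fragment output; with a single output it lands on
// location 0 (draw buffer 0) by default, so the explicit qualifier
// (or glBindFragDataLocation before linking) is only strictly
// needed when there are multiple outputs
layout(location = 0) out vec4 FragColor;

void main()
{
	FragColor = DyeColor;
}
```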

Managed to get it to locate the symbols (and implemented a fallback before resorting to outright failure), but I still have no triangle, even after clamping the values:

#shader BASIC "Vertex"
#version 330
#if 0
#define INDEX(POS) layout(location=POS)
#define FIXED(POS) layout(binding=POS)
#else
#define INDEX(POS)
#define FIXED(POS)
#endif

out vec2 DyeTexel;
out vec4 DyeColor;

struct CHANGE { vec4 Color; vec3 Point, Width; };

INDEX(0) in vec2 PosTexel;
INDEX(2) in vec4 PosColor;
INDEX(6) in vec3 PosPoint;
INDEX(9) in vec3 Normal;
INDEX(12) in vec3 Tangent;
INDEX(15) in vec3 BiNormal;

/* Joint, Model, Stage */

void main()
{
	int id = gl_VertexID;
	float x = 1.0, y = 0.0;
	vec4 p = vec4( PosPoint, 1.0 );

	if ( id == 0 )
	{
		x = 0.0;
		y = 1.0;
	}

	if ( id > 1 )
	{
		x = -x;
	}

	if ( p.x < 0.0 || p.x > 1.0 )
		p.x = x;

	if ( p.y < 0.0 || p.y > 1.0 )
		p.y = y;

	gl_Position = p;
	DyeColor = PosColor;
	DyeTexel = PosTexel;
}

#shader COLOR "Fragment"
#version 330

in vec2 DyeTexel;
in vec4 DyeColor;

uniform sampler2D Texture;

void main()
{
	vec4 color = DyeColor + texture( Texture, DyeTexel );
	for ( int i = 0; i < 4; ++ i )
	{
		if ( color[i] < 0.5 || color[i] > 1.0 )
			color[i] = 1.0;
	}
	gl_FragColor = color;
}

Edit: The only information I’ve been able to find on what to use instead of gl_FragColor led to code like this (I highly doubt I used GL_MAX_DRAW_BUFFERS correctly):

void BindColorShaderVarAll( GFX *Gfx, dint HeaderId, char *name )
{
	GFX_HEADER *Header = GraspGfxHeader( Gfx, HeaderId );
	dint i, max = GL_MAX_DRAW_BUFFERS;

	for ( i = 0; i < max; ++i )
		glBindFragDataLocation( Header->GfxId, i, name );
}

Edit 2: Found better info; I knew I’d used that value wrong:

void BindColorShaderVarAll( GFX *Gfx, dint HeaderId, char *name )
{
	GFX_HEADER *Header = GraspGfxHeader( Gfx, HeaderId );
	dint i, max;

	glGetIntegerv( GL_MAX_DRAW_BUFFERS, &max );

	/* note: glBindFragDataLocation() only takes effect on the next glLinkProgram() */
	for ( i = 0; i < max; ++i )
		glBindFragDataLocation( Header->GfxId, i, name );
}

Finally found where I was going wrong:

void PaintGfxStrip( GFX *Gfx, dint StripId, dint ImageId, uint Draw )
{
	GFX_STRIP *Strip = GraspGfxStrip( Gfx, StripId );
	BLOCK *CacheIDs = GrabGfxStripCacheIDs( Gfx, StripId );
	GFX_IMAGE *Image = GraspGfxImage( Gfx, ImageId );
	GFX_CACHE *Cache = GraspGfxCache( Gfx, Image->CacheId );
	dint LayoutId = Strip->LayoutId;
	GFX_LAYOUT *Layout = GraspGfxLayout( Gfx, Strip->LayoutId );
	dint i, *cacheIDs = CacheIDs->addr;


	BindGfxLayout( Gfx, LayoutId );
	glEnableVertexAttribArray( Layout->GfxId );

	BindGfxBuffer( Gfx, Cache->BufferId );

	for ( i = 0; i < CacheIDs->used; ++i )
	{
		Cache = GraspGfxCache( Gfx, cacheIDs[i] );
		BindGfxBuffer( Gfx, Cache->BufferId );
		SendGfxSymbol( Gfx, LayoutId, i );
		glDrawElements( Draw, Strip->points, GFXB_UINT, NULL );
	}

	//glDisableVertexAttribArray( Layout->GfxId );
}

I had the glDrawElements() call outside the loop and didn’t realise it needed to be inside it. Since I have the function up anyway, do you have any suggestions for efficiency improvements?