Uniform Buffer Storage Qualifier Error

I’m introducing multiple shaders into my game, and I want to use uniform buffers to pass shared information like the resolution or an orthographic matrix to all of my shaders at once. Until now, I have been successfully passing this information via individual uniform values:

#version 330 core

uniform mat3 ortho_matrix;

layout(location = 0) in vec2 position;
layout(location = 1) in vec2 sampler_uv;

out vec2 pixel;
out vec2 uv;

void main() {
	gl_Position = vec4(vec3(position, 1.0) * ortho_matrix, 1.0);

	pixel = position.xy;
	uv = sampler_uv;
}

After I introduced my uniform buffers, this is how the code looks:

#version 330

layout(location = 0) in vec2 position;
layout(location = 1) in vec2 sampler_uv;

layout(std140) uniform buffer {
	vec2 resolution;
	mat3 ortho_matrix;
};

out vec2 pixel;
out vec2 uv;

void main() {
	gl_Position = vec4(vec3(position, 1.0) * ortho_matrix, 1.0);

	pixel = position.xy;
	uv = sampler_uv;
}

I haven’t gotten to test my client-side code yet because my shaders won’t compile. When I try, I get the following error:

Vertex shader failed to compile with the following errors:
ERROR: 0:7: error(#392) At most one: storage qualifier is allowed
ERROR: 0:7: error(#132) Syntax error: "{" parse error
ERROR: error(#273) 2 compilation errors.  No code generated
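
For reference, this is how I pull that log after compiling (a generic sketch of the usual glGetShaderiv/glGetShaderInfoLog calls, not my exact code; shader here is the vertex shader object):

GLint ok = GL_FALSE;
glCompileShader(shader);
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (ok == GL_FALSE) {
	char log[1024];
	glGetShaderInfoLog(shader, sizeof(log), NULL, log);
	fprintf(stderr, "%s\n", log);
}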

I’ve searched everywhere, but I haven’t been able to find information about the storage qualifier error. I suspect there is some kind of conflict between the layout qualifiers for my attribute indexes and the one for my uniform block, but I haven’t found any online examples that use both at once. I need the attribute indexes because I’m using interleaved vertex buffer data to improve performance. If it isn’t possible to have both attribute indexes and uniform buffers, I’ll just skip the latter and keep passing uniforms to my shaders manually.

“buffer” became a keyword in GLSL 4.30 (it’s the storage qualifier for SSBOs), and if your shader compiler doesn’t pay much attention to the #version declaration, it can get very confused. Strictly speaking the compiler is in the wrong, since you specified 3.30, but it’s not surprising that it did this.

It’s best to name your stuff something meaningful so that it doesn’t cause these problems. Something like “viewport_description” or whatever.

Okay, I changed the name from buffer to ubuffer and the shader compiles! But now I’m having difficulty with the client-side portion of the code. I’ve looked at several online tutorials like this that explain the process of getting the block index, setting the binding point, generating the buffer, and updating the buffer, but I don’t think I’m doing it correctly.

Either I’m doing things out of order or I’m missing a line of code. When I boot up my game, I get a completely black screen, which suggests the ortho_matrix isn’t being passed to the shader correctly and nothing is rendering in the viewport. The first step is linking my program and then immediately binding the uniform block to it:

glLinkProgram(program);
block_id = glGetUniformBlockIndex(program, "ubuffer");
assert(block_id != GL_INVALID_INDEX);
glUniformBlockBinding(program, block_id, 0);
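
One extra check worth sketching here (it isn’t in my code; it just continues the snippet above) is asking GL how large it expects the block to be, which can expose a layout mismatch between the shader and the client-side struct:

GLint block_size = 0;
glGetActiveUniformBlockiv(program, block_id, GL_UNIFORM_BLOCK_DATA_SIZE, &block_size);
/* Under std140 the block is padded, so this can be larger than a
 * tightly packed client-side struct. */
printf("uniform block size: %d bytes\n", block_size);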

Later in my code I generate the buffer, update the buffer data, then push it to the GPU. uniform_init, uniform_set_resolution, uniform_set_ortho, uniform_update and uniform_quit are called in that order.

#include <GL/glfunc.h>
#include <assert.h>
#include <stdbool.h>
#include <string.h>
#include "shader.h"
#include "uniform.h"

/** Defines data that will be held by shaders. */
typedef struct {
	float resolution[2];
	float ortho_matrix[9];
} BUFFER;

/** Holds data that will be passed to shaders. */
static BUFFER data;
/** Index of buffer that will hold data. */
static GLuint buffer;
/** Used for synchronizing reads and writes. */
static GLsync fence;
/** Whether buffer needs to be updated or not. */
static bool update = false;

void uniform_init(void) {
	/* Create storage for buffer, set to binding point 0. */
	glGenBuffers(1, &buffer);
	glBindBuffer(GL_UNIFORM_BUFFER, buffer);
	glBindBufferBase(GL_UNIFORM_BUFFER, 0, buffer);
	glBufferData(GL_UNIFORM_BUFFER, sizeof(BUFFER), &data, GL_DYNAMIC_DRAW);
	fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

void uniform_quit(void) {
	glDeleteSync(fence);
	glDeleteBuffers(1, &buffer);
}

void uniform_update(void) {
	if (update) {
		GLboolean success;
		GLvoid* p;

		/* Orphan the old storage, then map and flush the new data. */
		glBindBuffer(GL_UNIFORM_BUFFER, buffer);
		glBufferData(GL_UNIFORM_BUFFER, sizeof(BUFFER), NULL, GL_DYNAMIC_DRAW);
		p = glMapBufferRange(	GL_UNIFORM_BUFFER, 0, sizeof(BUFFER),
								GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
		assert(p);
		glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
		memcpy(p, &data, sizeof(BUFFER));
		glFlushMappedBufferRange(GL_UNIFORM_BUFFER, 0, sizeof(BUFFER));
		success = glUnmapBuffer(GL_UNIFORM_BUFFER);
		assert(success);

		/* Delete the old fence, set one up for the next frame and
		   signal updates as finished. */
		glDeleteSync(fence);
		fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
		update = false;
	}
}

void uniform_set_resolution(float vec2[2]) {
	memcpy(data.resolution, vec2, sizeof(float[2]));
	update = true;
}

void uniform_set_ortho(float mat3[9]) {
	memcpy(data.ortho_matrix, mat3, sizeof(float[9]));
	update = true;
}
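
To illustrate the call order described above, here is a rough sketch of how I drive these functions (game_running and game_render are hypothetical stand-ins for my actual loop, and the identity matrix is just a placeholder):

#include "uniform.h"

/* game_running and game_render are hypothetical stand-ins. */
int main(void) {
	float res[2] = { 1280.0f, 720.0f };
	float ortho[9] = {	/* identity as a placeholder */
		1.0f,	0.0f,	0.0f,
		0.0f,	1.0f,	0.0f,
		0.0f,	0.0f,	1.0f
	};

	uniform_init();
	uniform_set_resolution(res);
	uniform_set_ortho(ortho);

	while (game_running()) {
		uniform_update();	/* only flushes when a setter marked the data dirty */
		game_render();
	}

	uniform_quit();
	return 0;
}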

I debugged uniform_update and I’m positive that the data is in my struct at that point. The syncing method is one I learned here, and I think it’s working: none of these assertions are failing, and I used the same method to render my vertex data:

void tarray_draw(TARRAY *me, int shader) {
	GLboolean success;
	void *data;

	/* Flush vertex data to GPU. */
	glBindVertexArray(me->array);
	glBindBuffer(GL_ARRAY_BUFFER, me->buffer);
	data = glMapBufferRange(GL_ARRAY_BUFFER, 0, sizeof(VERTEX) * me->size,
							GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT | GL_MAP_UNSYNCHRONIZED_BIT);
	assert(data);
	glClientWaitSync(me->fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
	memcpy(data, me->vert, sizeof(VERTEX) * me->size);
	glFlushMappedBufferRange(GL_ARRAY_BUFFER, 0, sizeof(VERTEX) * me->size);
	success = glUnmapBuffer(GL_ARRAY_BUFFER);
	assert(success);

	/* Signal shader how to interpret vertex data. */
	glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
	glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)(2 * sizeof(float)));
	glEnableVertexAttribArray(0);
	glEnableVertexAttribArray(1);

	/* Set current active texture. */
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, me->sampler);
	shader_1i(shader, "sampler", 0);

	/* Render contents of buffer. */
	shader_use(shader);
	glDrawArrays(GL_TRIANGLES, 0, 6 * me->size);

	/* Delete the old fence and set up sync for the next frame. */
	glDeleteSync(me->fence);
	me->fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

Am I missing any lines of code in any of these code snippets?

Okay, I found the problem! I suspected the std140 layout was messing with my ortho_matrix, so I modified my shader’s uniform block to hold a single float temp1 that I could use for testing:

#version 330 core

layout(std140) uniform ubuffer {
	float temp1;
};

uniform mat3 ortho_matrix;

layout(location = 0) in vec2 position;
layout(location = 1) in vec2 sampler_uv;

out vec2 pixel;
out vec2 uv;

void main() {
	gl_Position = vec4(vec3(position.x + temp1, position.y, 1.0) * ortho_matrix, 1.0);

	pixel = position.xy;
	uv = sampler_uv;
}

I modified temp1 on the client side and was successfully able to offset my position, which meant I was actually doing my function calls correctly. The real problem was the std140 layout itself: the vec2 resolution in front of the matrix shifts the matrix’s offset, because std140 requires a mat3 to start on a 16-byte boundary, and on top of that it demands that each row of my mat3 be padded to 4 floats, not 3. My tightly packed C struct satisfied neither rule. So on the client side of things, I modified my ortho_matrix to be:

static float ortho_matrix[] = {
	1.0f,	0.0f,	-1.0f,	0.0f,
	0.0f,	1.0f,	1.0f,	0.0f,
	0.0f,	0.0f,	1.0f,	0.0f
};

With both of these modifications I got the shader working smoothly! Gotta watch out for that std140 layout.
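
For reference, a C struct that actually matches the std140 layout of the original vec2 resolution plus mat3 ortho_matrix block would need the padding spelled out explicitly (offsets below follow the std140 rules; glGetActiveUniformsiv with GL_UNIFORM_OFFSET can confirm them at runtime):

/* std140 layout of: vec2 resolution; mat3 ortho_matrix; */
typedef struct {
	float resolution[2];		/* offset  0, 8 bytes           */
	float _pad0[2];			/* offset  8, pad to 16         */
	float ortho_matrix[3][4];	/* offset 16, three vec4 chunks */
} BUFFER_STD140;			/* 64 bytes total               */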