Problems with recording coordinates from Transform Feedback

Hello there,

I have been trying to record geometry shader output coordinates with transform feedback, but I cannot seem to make it work.

The problem is that my shader class reports location = -1 for the uniforms that live in the geometry shader, yet in a control test without transform feedback the same class resolves all my uniforms correctly.
The other possibility is that, although I set up transform feedback by the book (following open.gl/feedback), I may not have set up my geometry shader properly.

main.cpp: transform feedback inside the game loop


//Camera
		glm::mat4 ortho = glm::ortho(-20.f, 20.f, -20.f, 20.f);
		glm::mat4 translate = glm::translate(glm::mat4(1.0f), campos);
		glm::mat4 rotate = glm::rotate(glm::mat4(1.0f), CAMangle, glm::vec3(0.0, 0.0, 1.0));
		glm::mat4 scale = glm::scale(glm::mat4(1.0f), glm::vec3(zoom, zoom, 0));
		glm::mat4 unionmatrix = ortho * translate * rotate * scale; //lumping together all needed transformation matrices
		// Coloring

		transform.engage(); // use the shader program



		glUniformMatrix4fv(geom.Uni("mvp"), 1, GL_FALSE, glm::value_ptr(unionmatrix));
		glUniform1f(geom.Uni("angle"), bangle);
		glUniform1f(geom.Uni("multiplier"), multiplier);


		glEnable(GL_RASTERIZER_DISCARD);		

		glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, TBO);
		glBeginTransformFeedback(GL_LINES);
		glDrawArrays(GL_LINE_STRIP, 0, 4);
		glEndTransformFeedback();
		
		glDisable(GL_RASTERIZER_DISCARD);
		
		glFlush();



		Window.update();

	
		GLdouble feedback[4]; // both branches' x and y coordinates, hence an array of 4
		glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof(feedback), feedback);

		for (int i = 0; i < 4; i++) {
			printf("%f\n", feedback[i]);
		}

main.cpp: transform feedback outside the game loop

      

Shader vertex(GL_VERTEX_SHADER, "Source/Fractals/TREE/Shaders/fractal.vert");
Shader geometry(GL_GEOMETRY_SHADER, "Source/Fractals/TREE/Shaders/fractal.geom");

ShaderProgram transform;
transform.attachshader(vertex);
transform.attachshader(geometry);

const char* varyings[] =
{
	"leftbranch",
	"rightbranch"
};
glTransformFeedbackVaryings(transform.getID(), 2, varyings, GL_INTERLEAVED_ATTRIBS);

Uniform geom(transform);
geom.LocUni("angle");
geom.LocUni("multiplier");
geom.LocUni("mvp");

transform.linkProgram();



GLuint VAO;
glGenVertexArrays(1, &VAO);
glBindVertexArray(VAO);

GLdouble trunk[]
{
	stemstart.x, stemstart.y,
	stemend.x, stemend.y,
};

GLuint VBO;
glGenBuffers(1, &VBO);
glBindBuffer(GL_ARRAY_BUFFER, VBO);
glBufferData(GL_ARRAY_BUFFER, 2 * sizeof(trunk), trunk, GL_DYNAMIC_DRAW);
glVertexAttribPointer(0, 2, GL_DOUBLE, GL_FALSE, 0, 0);
glEnableVertexAttribArray(0);

		//Shader fragmentTransform(GL_VERTEX_SHADER, "Source/Fractals/TREE/Shaders/tree.frag")


	
	
	GLuint TBO;
	glGenBuffers(1, &TBO);
	glBindBuffer(GL_ARRAY_BUFFER, TBO);
	glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(trunk), nullptr, GL_DYNAMIC_READ);

vertex shader.


#version 450 core

layout(location = 0) in vec2 position;
out vec2 pos;

void main()
{
	//gl_Position = vec4(position, 0.0, 1.0);
	pos = position;
}

geometry shader.


#version 450 core

layout(lines) in;
layout(line_strip, max_vertices = 4) out;

in vec2 pos[2];
out dvec2 leftbranch;
out dvec2 rightbranch;

uniform float angle; // angle between the branches and the midbranch
uniform float multiplier; // branch length multiplier
uniform mat4 mvp = mat4(1.0); // combined transformation matrices

vec2 midbranch = (pos[1] - pos[0]) * multiplier; // middle branch endpoint, from the input line's beginning and ending points

void main()
{
	// Defining rightbranch beginning point (vertex)
	pos[1];
	EmitVertex();
	// Calculation and emission of rightbranch's ending point (vertex)
	rightbranch.x = mvp*(midbranch.x*cos(radians(angle)) + midbranch.y*sin(radians(angle)));
	rightbranch.y = mvp*(-midbranch.x*sin(radians(angle)) + midbranch.y*cos(radians(angle)));
	rightbranch + pos[1];
	EmitVertex();
	EndPrimitive(); // end of emission of the branch
	// Defining leftbranch beginning point
	pos[1];
	EmitVertex();
	// Calculation and emission of leftbranch's ending point (vertex)
	leftbranch.x = mvp*(midbranch.x*cos(radians(-angle)) + midbranch.y*sin(radians(-angle)));
	leftbranch.y = mvp*(-midbranch.x*sin(radians(-angle)) + midbranch.y*cos(radians(-angle)));
	rightbranch + pos[1];
	EmitVertex();
	EndPrimitive();
}

The next goal for this program is to build a fractal tree through multiple transform feedback passes in the game loop, using the coordinates retrieved from transform feedback.

Are the uniforms in question still active when you don’t have a fragment shader? Uniforms whose values aren’t required to compute any output will be eliminated from the shader program.

Also, if the glGetUniformLocation() call is in Shader::LocUni(), you’re calling that before ShaderProgram::linkProgram(), but uniform locations aren’t assigned (and can’t be queried) until the program is linked.
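
For reference, the order matters on both sides: the transform feedback varyings must be specified before linking, while uniform locations can only be queried after linking. A minimal sketch with raw GL calls (your wrapper classes will differ):

glTransformFeedbackVaryings(prog, 2, varyings, GL_INTERLEAVED_ATTRIBS); // before link
glLinkProgram(prog);

GLint loc = glGetUniformLocation(prog, "mvp"); // only valid after link
if (loc == -1) {
	// either the name is wrong, or the uniform was optimised out as inactive
}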

I seem to have gotten the uniforms to resolve using your method, but I still cannot get correct data from transform feedback; maybe you know why?
You also said that “Uniforms whose values aren’t required to compute any output will be eliminated from the shader program”.
The problem is that this is my first time using transform feedback in such an uncommon way, and programming shaders without gl_Position is quite confusing.
It is confusing because with gl_Position I can draw the two branches on screen using the geometry and fragment shaders, but with transform feedback it seems that writing gl_Position breaks the recording for me.
Is there a possible way to output data to the screen without the homogeneous built-in GLSL coordinates?

gl_Position is required for rendering. The rasterisation process cannot determine which fragments the primitive covers without gl_Position.

Originally posted by GClements
gl_Position is required for rendering. The rasterisation process cannot determine which fragments the primitive covers without gl_Position.

Well damn.
I am stumped, because once I change rightbranch + pos[1]; to gl_Position = rightbranch + pos[1]; (and vice versa), LocUni does not locate my uniforms in the geometry shader, and transform feedback does not record the coordinates of rightbranch and leftbranch properly.
Maybe I should enable a fragment shader, but what should I do with leftbranch and rightbranch? They will be output straight to the fragment shader.

Are those variables still being marked for transform feedback capture? Is gl_Position marked for transform feedback capture?

Also: the “rightbranch + pos[1];” statements are pointless, as they have no side effects. They will just be eliminated.

Sorry for asking what is probably a stupid question, but how can I mark gl_Position for transform feedback capture?

And as I thought, my current geometry shader statements could be pointless, because when I change them to gl_Position = statements in a control test without transform feedback, it works perfectly.

With glTransformFeedbackVaryings(), the same as any other output variable.
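
E.g., sketched against the getID()/linkProgram() wrapper from your earlier code:

const char* varyings[] = { "gl_Position" };
glTransformFeedbackVaryings(transform.getID(), 1, varyings, GL_INTERLEAVED_ATTRIBS);
transform.linkProgram(); // the varying list only takes effect at link time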

Sorry for replying after such a long time (because of the OpenGL forums migration), but:

I have been trying to make transform feedback record the coordinates from the geometry shader using the methods you have given, and when I marked gl_Position for capture in glTransformFeedbackVaryings(), the shader program failed to link. I have been trying to fix that ever since.

I also tried adding a fragment shader as your posts suggest, but I ran into a problem.
Since the fragment shader comes right after the geometry shader, all of the geometry shader’s output goes straight to the fragment shader. For now I have been trying to draw “black” branches, but what should I do with the geometry shader outputs that are marked for recording, leftbranch and rightbranch?

tree.frag

#version 450 core

in vec2 leftbranch, rightbranch;
out vec4 color1;

void main()
{
	// defining color (black)
	color1 = vec4(0.0, 0.0, 0.0, 1.0);
}

new fractal.geom

#version 450 core



layout(lines) in;

layout(line_strip, max_vertices = 4) out;
in vec2 pos[2];
out vec2 leftbranch;
out vec2 rightbranch;

uniform float angle; // angle between the branches and the midbranch
uniform float multiplier; // branch length multiplier
uniform mat4 mvp = mat4(1.0); // matrices for rotation, projection, scaling and such

vec2 midbranch = (pos[1] - pos[0]) * multiplier; // middle branch endpoint, from the input line's beginning and ending points
void main()
{



	// Defining rightbranch beginning point (vertex)
	gl_Position = vec4(pos[1], 0.0, 1.0);
	// Calculation and emission of rightbranch's ending point (vertex)
	rightbranch.x = mvp*(midbranch.x*cos(radians(angle)) + midbranch.y*sin(radians(angle)));
	rightbranch.y = mvp*(-midbranch.x*sin(radians(angle)) + midbranch.y*cos(radians(angle)));
	gl_Position = vec4(rightbranch + pos[1], 0.0, 1.0);
	EmitVertex();
	EndPrimitive(); // end of emission of the branch
	// Defining leftbranch beginning point

	gl_Position = vec4(pos[1], 0.0, 1.0);
	// Calculation and emission of leftbranch's ending point (vertex)
	leftbranch.x = mvp*(midbranch.x*cos(radians(-angle)) + midbranch.y*sin(radians(-angle)));
	leftbranch.y = mvp*(-midbranch.x*sin(radians(-angle)) + midbranch.y*cos(radians(-angle)));
	gl_Position = vec4(rightbranch + pos[1], 0.0, 1.0);
	EmitVertex();
	EndPrimitive(); // ending the emission of a branch
}

If you’re going to emit line strips, you should emit at least 2 vertices per primitive. Try adding an EmitVertex() after the two other places where you set gl_Position without calling it.

Also, in the lines where you multiply by mvp, you’re multiplying a matrix by a float and assigning the result to a float, which doesn’t make sense. You typically multiply a matrix by a vector to get a vector (or a matrix by a matrix to get a matrix).

Are you checking for GLSL compile errors? This shader doesn’t even compile, much less link with other shaders into a program, due to the float = matrix * float error I just mentioned:

tst2.geom(25) : error C7011: implicit cast from "mat4" to "float"
tst2.geom(26) : error C7011: implicit cast from "mat4" to "float"
tst2.geom(34) : error C7011: implicit cast from "mat4" to "float"
tst2.geom(35) : error C7011: implicit cast from "mat4" to "float"

If you remove the mvp * prefix from the 4 expressions it appears in (and add the 2 extra EmitVertex() calls I mentioned), you’ll get a shader that will at least compile.
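
If you aren’t checking already, a minimal status check after compiling and linking looks roughly like this (logging style is up to you):

GLint ok = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (!ok) {
	char log[4096];
	glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
	fprintf(stderr, "compile failed:\n%s\n", log);
}

glGetProgramiv(prog, GL_LINK_STATUS, &ok);
if (!ok) {
	char log[4096];
	glGetProgramInfoLog(prog, sizeof(log), nullptr, log);
	fprintf(stderr, "link failed:\n%s\n", log);
}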

Also, @GClements has already hit on this, but I don’t think you want to capture the leftbranch and rightbranch values in your transform feedback output, but rather the gl_Position values. Assuming so, update your glTransformFeedbackVaryings() statement appropriately (i.e. capture only "gl_Position"), and get rid of the out in front of the leftbranch and rightbranch decls.

If you instead wanted to capture leftbranch and rightbranch though, you’d need to set them both to reasonable values for each emitted vertex, which you’re not doing.

Thank you, but there is still quite a problem.

Even when I capture only gl_Position, and the shader program links all the shaders with the uniforms resolved correctly, I still do not get the desired output from my transform feedback buffer.

The undesired output is all zero values.

My input consists of these 2 vec2 coordinates:

glm::vec2 stemstart;
glm::vec2 stemend;

stemstart.x = 0.f;
stemstart.y = 0.f;

stemend.x = 0.f;
stemend.y = 5.f;

Technically, it should not be outputting 0.f (or MAX_FLOAT) values in most cases.
What I really want is to transform these input values into 2 lines whose endpoints are calculated with a trigonometric formula.

In this case I expect to get 4 vec2 values from transform feedback.

glm::vec2 feedback[4];
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof(feedback), feedback);

for (int i = 0; i < 4; i++) {
	std::cout << feedback[i].x << " " << feedback[i].y << std::endl;
}

From this I expect:
feedback[0] and feedback[2] = stemend.x and .y;
feedback[1] and feedback[3] = the rightbranch and leftbranch endpoints respectively; all of which are subject to change by user-defined input.

redone fractal.geom

#version 450 core

layout(lines) in;
layout(line_strip, max_vertices = 4) out;

in vec2 pos[2];

uniform float angle; // angle between the branches and the midbranch
uniform float multiplier; // branch length multiplier
uniform mat4 mvp = mat4(1.0); // matrices for rotation and such

vec2 midbranch = (pos[1] - pos[0]) * multiplier; // middle branch endpoint, from the input line's beginning and ending points

void main()
{
	vec2 leftbranch;
	vec2 rightbranch;

	// Defining rightbranch beginning point (vertex)
	gl_Position = mvp*vec4(pos[1], 0.0, 1.0);
	EmitVertex();
	// Calculation and emission of rightbranch's ending point (vertex)
	rightbranch.x = midbranch.x*cos(radians(angle)) + midbranch.y*sin(radians(angle));
	rightbranch.y = -midbranch.x*sin(radians(angle)) + midbranch.y*cos(radians(angle));
	gl_Position = mvp*vec4(rightbranch + pos[1], 0.0, 1.0);
	EmitVertex();
	// Defining leftbranch beginning point
	gl_Position = mvp*vec4(pos[1], 0.0, 1.0);
	EmitVertex();
	// Calculation and emission of leftbranch's ending point (vertex)
	leftbranch.x = midbranch.x*cos(radians(-angle)) + midbranch.y*sin(radians(-angle));
	leftbranch.y = -midbranch.x*sin(radians(-angle)) + midbranch.y*cos(radians(-angle));
	gl_Position = mvp*vec4(rightbranch + pos[1], 0.0, 1.0);
	EmitVertex();
	EndPrimitive();
}

I tried it without multiplying by mvp, but the results are still the same…

Ok, well, getting the math the way you want is up to you. But let’s drop back and get you a geometry shader that produces “predictable” output. You can then build on it however you want.

Save off your existing geometry shader, and replace it with this:

#version 420 core

layout(lines) in;

layout(line_strip, max_vertices = 4) out;

out vec2 myPos;

void main()
{
    myPos = vec2( 1.0, 2.0 );
    EmitVertex();

    myPos = vec2( 3.0, 4.0 );
    EmitVertex();
    EndPrimitive();

    myPos = vec2( 5.0, 6.0 );
    EmitVertex();

    myPos = vec2( 7.0, 8.0 );
    EmitVertex();
    EndPrimitive();
}

First off, above you said you wanted to output vec2 positions, not vec4 positions (as gl_Position outputs). So I’ve changed your shader to output a vec2, myPos, instead of a vec4. Change your glTransformFeedbackVaryings() statement to capture this output instead.

Now when you read back the output of the transform feedback buffer, if you have everything else set up properly, you should see each invocation generate 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0. Make sure that you get that.

Once you have this working, you can then complicate it with more math, inputs, and uniforms any way you like.
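
For the test, the C++ side might look roughly like this (a sketch, assuming a single input line is drawn so the shader runs exactly once, and that TBO holds at least 8 floats):

// Before linking: capture myPos instead of the old varyings.
const char* varyings[] = { "myPos" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// Draw with rasterization off, then read the captured values back.
glEnable(GL_RASTERIZER_DISCARD);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, TBO);
glBeginTransformFeedback(GL_LINES);
glDrawArrays(GL_LINES, 0, 2); // one line in -> one geometry shader invocation
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

GLfloat feedback[8]; // 4 emitted vertices * vec2 (float, since myPos is a vec2)
glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, sizeof(feedback), feedback);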


Thank you. With this piece of code I was able to analyze how transform feedback records coordinates from the geometry shader, and I was able to record the coordinates I wanted with the following:

layout(lines) in;
layout(line_strip, max_vertices = 8) out;

uniform float angle; // angle between the branches and the midbranch
uniform float multiplier; // branch length multiplier
uniform mat4 mvp = mat4(1.0); // matrices for rotation and such

out vec2 branchpos;

void main()
{
	// calculating the middle branch endpoint from the input line's beginning and ending points
	vec2 midbranch;
	midbranch.x = (gl_in[1].gl_Position.x - gl_in[0].gl_Position.x) * multiplier;
	midbranch.y = (gl_in[1].gl_Position.y - gl_in[0].gl_Position.y) * multiplier;
	vec2 stemend; // shorthand, so I don't have to keep using the separate components of gl_in[1].gl_Position
	stemend.x = gl_in[1].gl_Position.x;
	stemend.y = gl_in[1].gl_Position.y;
	vec2 leftbranch;
	vec2 rightbranch;

	// Defining rightbranch beginning point (vertex)
	branchpos = stemend;
	EmitVertex();
	// Calculation and emission of rightbranch's ending point (vertex)
	rightbranch.x = midbranch.x*cos(radians(angle)) + midbranch.y*sin(radians(angle));
	rightbranch.y = -midbranch.x*sin(radians(angle)) + midbranch.y*cos(radians(angle));
	branchpos = rightbranch + stemend;
	EmitVertex();
	EndPrimitive();
	// Defining leftbranch beginning point
	branchpos = stemend;
	EmitVertex();
	// Calculation and emission of leftbranch's ending point (vertex)
	leftbranch.x = midbranch.x*cos(radians(-angle)) + midbranch.y*sin(radians(-angle));
	leftbranch.y = -midbranch.x*sin(radians(-angle)) + midbranch.y*cos(radians(-angle));
	branchpos = leftbranch + stemend;
	EmitVertex();
	EndPrimitive();

	// Trying to draw the transformed vertices (8 vertices emitted in total, hence max_vertices = 8).
	gl_Position = mvp*vec4(stemend, 0.0, 1.0);
	EmitVertex();
	gl_Position = mvp*vec4(stemend + rightbranch, 0.0, 1.0);
	EmitVertex();
	EndPrimitive();

	gl_Position = mvp*vec4(stemend, 0.0, 1.0);
	EmitVertex();
	gl_Position = mvp*vec4(stemend + leftbranch, 0.0, 1.0);
	EmitVertex();
	EndPrimitive();
}

And as I am still a newbie in modern OpenGL, I still have other problems to solve and questions to ask about transform feedback (since the documentation on it is quite scarce), for example how to render the transformed vertices straight from the transform feedback buffer. But that will be left for another thread, because it would go off topic.

Goodbye, and thanks for all the support, @Dark_Photon and @GClements.

Great!

Also, before you go spending even more time with transform feedback, you should be aware that the performance of geometry shader-based solutions is fairly poor in many cases. If you’re doing something small-scale with it (like culling a few hundred point features, or transforming a small number of points to quads), it may be fine for your needs. But if you try to scale it up to something on the order of all the primitives in your scene, the performance will very likely underwhelm you. There are good reasons for this, but I won’t bore you with them.

Thank you for the quite reasonable advice about the performance of geometry shaders.

But I had been questioning whether I should use a vertex shader or a geometry shader to calculate the endpoints of the branches for my project, ever since I learned that the geometry shader can be quite slow at the kind of task I am giving it.

The most problematic part of the decision, for me, was that my object is a fractal, a fractal tree to be exact, which branches out theoretically ad infinitum, or practically until it reaches some threshold n.

Python version of said tree.

Hence the tree branching requires an output ratio of 1:x, where x is more than 1. I had to choose the geometry shader because the vertex shader documentation states that its output has to be exactly 1:1 and no more; plus it executes per vertex, which is bad, because I need to choose variables from the given input data.

Maybe I am missing something?

Sorry for being off-topic.

Best Regards StabberKnight.

No. If you want to generate geometry on the GPU, you need to use a geometry shader. A vertex shader cannot create or discard primitives.

But there are other ways to generate such fractals. E.g. render the base level to a texture, render multiple transformed copies of the texture to a new texture, iterate that process for the desired number of levels.

Also, bear in mind that the recursion doesn’t have to strictly follow the fractal’s structure; you can coalesce multiple levels. E.g. if the fractal is X=AX+BX, then you can generate two levels at a time using X=A(AX+BX)+B(AX+BX)=AAX+ABX+BAX+BBX. Moreover, if the various copies which make up the whole have different scale factors, you can vary the number of levels merged so that the composite transformations all have roughly the same scale factor. In turn, you may then be able to eliminate the geometry shader and use instancing for most of the work (and use the CPU to generate the geometry for the top-most levels).
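
To make the coalescing concrete, here’s a rough CPU-side sketch (using glm; the function name is illustrative) that expands k levels of X=AX+BX into the 2^k composite transforms you’d then hand to instanced rendering:

#include <vector>
#include <glm/glm.hpp>

// Expand k levels of X = AX + BX into 2^k composite transforms.
std::vector<glm::mat4> coalesce(const glm::mat4& A, const glm::mat4& B, int k)
{
	std::vector<glm::mat4> out{ glm::mat4(1.0f) };
	for (int level = 0; level < k; ++level) {
		std::vector<glm::mat4> next;
		next.reserve(out.size() * 2);
		for (const glm::mat4& M : out) {
			next.push_back(M * A); // descend into the A copy
			next.push_back(M * B); // descend into the B copy
		}
		out = std::move(next);
	}
	return out; // k=2 yields AA, AB, BA, BB, matching the expansion above
}

Each composite matrix can then be supplied per instance (e.g. via a uniform or storage buffer) and the base geometry drawn once with glDrawArraysInstanced().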

Well, “technically” this is correct. 1 set of input data → 1 set of output data.

Q: But what’s a “set of data” here? A: Anything you want it to be! …subject to certain limitations. In this case, up to 16 vectors accepted as input, and 15-vectors-worth accepted as output.

Typically you’d use all of the output vectors to represent data for 1 vertex. But who says you have to? If you split them across 2 vertices, you can generate 2 vertices of output. Split them across 4 and you can generate 4 vertices of output. Capture them into a buffer object with Transform Feedback, and the pipeline is none the wiser that you just used the vertex shader to perform a 1:N operation (where N is small).

Your branching factor is 2, so you could reformulate your generation in terms of a vertex shader which accepts 1 point as input and generates 2 points as output. Something like this:

#version 330

in  vec3 posIn;
out vec3 pos1Out;
out vec3 pos2Out;

void main(void)
{
  vec3 offsetLeft = ..., offsetRight = ...;

  pos1Out = posIn + offsetLeft;
  pos2Out = posIn + offsetRight;
}

Capture that output with Transform Feedback and you’ve got your positions. Pair that with an index list that’s precalculated to provide the connections between vertices, and render your lines!

Note that I mention this because it’s possible, and it gives you what you want, generated on the GPU, without using a geometry shader. I’m not trying to say that this is the best choice for performance. You should stack these methods up against others like 1) full CPU-side generation, or 2) generating all this with a compute shader instead.
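
For completeness, the capture side of that might look roughly like this (a sketch only; srcVBO/dstVBO are two buffers being ping-ponged, so each pass’s output becomes the next pass’s input):

// Before linking: capture both outputs, interleaved (pos1, pos2 per input point).
const char* vsVaryings[] = { "pos1Out", "pos2Out" };
glTransformFeedbackVaryings(prog, 2, vsVaryings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);

// One pass; each pass doubles the point count (N points in -> 2N points captured).
glEnable(GL_RASTERIZER_DISCARD);
glBindBuffer(GL_ARRAY_BUFFER, srcVBO); // this pass's input points
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, dstVBO);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, numPoints);
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

std::swap(srcVBO, dstVBO); // next pass reads what this pass wrote
numPoints *= 2;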


I can’t imagine that the frequent FBO bindings needed to pull that off would be faster than even the notoriously slow geometry shaders. I’d say use a compute shader.
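
A minimal sketch of that direction (illustrative only; the buffer layout and names are assumptions): one invocation per parent branch, writing two child branches to an SSBO, using the same trigonometry as the geometry shaders above:

#version 450 core
layout(local_size_x = 64) in;

struct Branch { vec2 start; vec2 end; };

layout(std430, binding = 0) readonly buffer Parents { Branch parents[]; };
layout(std430, binding = 1) writeonly buffer Children { Branch children[]; };

uniform float angle;
uniform float multiplier;

void main()
{
	uint i = gl_GlobalInvocationID.x;
	if (i >= uint(parents.length())) return;

	vec2 mid = (parents[i].end - parents[i].start) * multiplier;
	float c = cos(radians(angle)), s = sin(radians(angle));

	vec2 right = vec2( mid.x*c + mid.y*s, -mid.x*s + mid.y*c); // rotate by +angle
	vec2 left  = vec2( mid.x*c - mid.y*s,  mid.x*s + mid.y*c); // rotate by -angle

	children[2u*i + 0u] = Branch(parents[i].end, parents[i].end + right);
	children[2u*i + 1u] = Branch(parents[i].end, parents[i].end + left);
}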