Transform Feedback

Hi all,

I’m trying to get the GL_EXT_transform_feedback extension to work in order to perform hardware skinning efficiently. The problem appeared in the function glTransformFeedbackVaryingsEXT.

As described in the OpenGL spec, this function takes an array of strings representing varying names:
glTransformFeedbackVaryingsEXT (GLuint program, GLsizei count, const char ** varyings, GLenum bufferMode);

After reading it I considered this approach simple and easy. But in the actual glext.h header downloaded from this site, the prototype for glTransformFeedbackVaryingsEXT takes an array of integers representing varying locations:
glTransformFeedbackVaryingsEXT (GLuint program, GLsizei count, const GLint *locations, GLenum bufferMode);

The same prototype can be found in GLee. I’m not against using locations, but where can I get them? As far as I know, there is just the function glGetVaryingLocationNV, but what about the general case (ATI, others)?

I’ll be grateful if anybody points me to a working example of GL_EXT_transform_feedback. Thanks in advance.

Make sure you have the latest headers (updated recently).

Haven’t used this extension yet myself but it seems that TF is part of the core in 3.0 and available right out of the box on a NV G80 or better with the latest drivers (182.08 by my reckoning).

Of course I mean the latest headers. I check almost every day.

Haven’t used this extension yet myself but it seems that TF is part of the core in 3.0 and available right out of the box on a NV G80 or better with the latest drivers (182.08 by my reckoning).

The NV support is obvious, as there is a complete GL_NV_transform_feedback extension. I’m actually talking about the GL_EXT_* version.

With the new OpenGL 3.1 specification the glext.h header is correct and matches the spec. However, I still can’t get transform feedback to work. The code:

prog_id = glCreateProgramObject();
glAttachObject(prog_id, prog_vert);
glAttachObject(prog_id, prog_frag);

const char *vars[] = { "gl_Position" };
const unsigned ids[] = { buf_dst_id };

glTransformFeedbackVaryings(prog_id, 1, vars, GL_SEPARATE_ATTRIBS_EXT);
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER_EXT, 0, ids[0]);

//!!! here get an error:
//!!! access violation


glBindBuffer(GL_ARRAY_BUFFER, buf_src_id);
glVertexPointer(3, GL_FLOAT, 0, NULL);

glDrawArrays(GL_POINTS, 0, num_vertexes);

Have you checked whether this extension is supported on your platform and by your current driver? You can use the GLEW or GLee library to make the process easier.

Yes, the extension ‘GL_EXT_transform_feedback’ is supported.
I have a Radeon 2600HD video on board controlled by ATI Catalyst 9.3 driver.

I am quite certain transform feedback is not working in the current ATI drivers (I’m running some kind of Catalyst 9.4 pre-release bundled with Ubuntu 9.04RC on Linux x86_64).

Whenever you call glTransformFeedbackVaryings with invalid varying names (e.g. a name not defined as an out variable in your vertex shader), the driver will segfault in glLinkProgram (which is the point when these settings are supposed to take effect).
In addition, built-in variables (such as gl_Position) are not accepted for feedback.

Obviously this bug can be worked around by using a user-defined out variable for feedback.
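A minimal vertex shader for that workaround might look like the following sketch (the name out_pos is just an illustration; any user-defined varying should do):

```glsl
// User-defined varying captured by transform feedback instead of gl_Position
varying vec4 out_pos;

void main() {
    out_pos = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_Position = out_pos;
}
```

The varying name is then passed to glTransformFeedbackVaryingsEXT before linking the program, since the setting takes effect at link time.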

But I still couldn’t get transform feedback to work up to now, because I’m unable to bind a feedback buffer:

  • glBindBufferBase results in a GL_INVALID_OPERATION error
  • glBindBufferOffset and glBindBufferRange cause a segfault

Has anybody had success setting up transform feedback with ATI’s OpenGL 3.0 drivers?

Does anybody know the preferred way to report bugs to ATI (I had no luck searching their website)?

Thank you, Seegel, for the informative post.
Sorry for being so late with an answer - I just didn’t have much to say until now.

I gave up trying to get it to work in C++ with GL 2.1.
Now I’m using Boo(.net) with OpenTK creating pure GL 3.0 context window.
And I faced the same problem you did…

If you ever get an idea how to solve it - let me know, please. I can’t live without skinning performed on the GPU using TF :)

I can’t live without skinning performed on GPU

Use textures, FBO (with MRT), PBO and VBO.

Store all your vertices in an rgb32f texture. Store additional attributes in other textures (indices, weights, normals, binormals, tangents).

Store bone matrices in an rgba32f texture. Make sure the indices in the texture point to the correct bone texel.

Then render a screen-aligned quad with the skinning shader, storing the result in an FBO with MRT. After that, read the result back into a PBO, rebind the PBO as a VBO, set up the pointers and render.

You can store multiple characters in one texture and perform skinning on all of them in single rendering pass.

Thanks, yooyo

I’ll consider using textures for skinning.
However, the TF approach would be more natural (no need for PBO or MRT; all attribs are just VBOs) and fast, so a working example is still needed…

Edit: current TF status:
upgraded to Catalyst 9.6
BindBufferBase still produces an InvalidOperation error

BindBufferRange produces a segfault, as reported by S.Seegel
Seems like this old driver issue is still present in Catalyst

I’ve made a small test example. The results are:
For ATI Catalyst 9.5: InvalidOperation on BindBufferBase call
For Nvidia 180.xx WHQL: (will be posted a little bit later)

If there is someone from ATI/Nvidia here, please tell me whether I’m doing something wrong or it’s just a driver bug.

#include "glew.h"
#include "glfw.h"
#include <stdio.h>
#include <assert.h>

#pragma comment(lib,"GLFW.lib")
#pragma comment(lib,"glew32.lib")
#pragma comment(lib,"opengl32.lib")

static const char* const sourceVert = "varying vec4 out_pos;		\
	void main()	{ gl_Position = out_pos = gl_Vertex + vec4(0.5); }";
static const char* const sourceFrag = "varying vec4 out_pos;	\
	void main()	{}";

static void checkShader(const GLuint id, const GLenum type, const char str[])	{
	static char msg[240];
	int size,aux;
	glGetObjectParameterivARB(id, type, &size);
	if(size) return;	/* status is GL_TRUE, nothing to report */
	glGetObjectParameterivARB(id, GL_OBJECT_INFO_LOG_LENGTH_ARB, &size);
	assert(size < (int)sizeof(msg));
	glGetInfoLogARB(id, size, &aux, msg);
	msg[aux] = 0;
	printf("%s: %s\n", str, msg);
}

int main()	{
	glfwInit();
	int ok = glfwOpenWindow(400, 300, 8,8,8, 0,0,0, GLFW_WINDOW);
	if(!ok) return -1;
	GLenum err = glewInit();
	if(err != GLEW_OK) return -2;
	printf("Context initialized\n");
	if(!GLEW_EXT_transform_feedback || !GLEW_ARB_shader_objects || !GLEW_ARB_vertex_buffer_object)
		return -3;
	printf("Extensions supported\n");
	GLuint shid, bid, tmp;

	shid = glCreateProgramObjectARB();
	tmp = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
	glShaderSourceARB(tmp, 1, (const GLcharARB**)&sourceVert, NULL);
	glCompileShaderARB(tmp);
	checkShader(tmp, GL_OBJECT_COMPILE_STATUS_ARB, "Compile vertex");
	glAttachObjectARB(shid, tmp);

	tmp = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
	glShaderSourceARB(tmp, 1, (const GLcharARB**)&sourceFrag, NULL);
	glCompileShaderARB(tmp);
	checkShader(tmp, GL_OBJECT_COMPILE_STATUS_ARB, "Compile fragment");
	glAttachObjectARB(shid, tmp);

	const char *vars[] = {"out_pos"};
	glTransformFeedbackVaryingsEXT(shid, 1, vars, GL_SEPARATE_ATTRIBS_EXT);
	glLinkProgramARB(shid);	/* varyings take effect at link time */
	checkShader(shid, GL_OBJECT_LINK_STATUS_ARB, "Link result");
	printf("Shader loaded: %d\n", glGetError());

	glGenBuffersARB(1, &bid);
	glBindBufferARB(GL_ARRAY_BUFFER, bid);
	const float init[] = {0.1f,0.2f,0.3f,0.4f};
	glBufferDataARB(GL_ARRAY_BUFFER, 4*sizeof(float)*1, init, GL_STREAM_DRAW);
	printf("Buffer prepared: %d\n", glGetError());

	printf("Feedback assigned: %d\n", glGetError());

	printf("Drawing completed: %d\n", glGetError());

	float *const data = (float *)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
	for(int i=0; data && i!=4; ++i)
		printf("%.1f ", data[i]);
	glUnmapBuffer(GL_ARRAY_BUFFER);
	printf("\nData read: %d\n", glGetError());
	return 0;
}

Try changing this




since you’re only binding one varying for transform feedback. The index parameter starts at 0.

Thanks for reading the code, GHotep.
It was a simple mistake, but fixing it doesn’t help. The same error 1282 (Invalid Operation).

Hmm. I don’t see anything else obviously wrong - one guess, try adding a #version 120 or #version 130 as the first line of the shader. Another possibility, try binding gl_Position instead of out_pos, in case the latter is being eliminated during optimization.

  1. Tried to specify the version. 120 works the same way as when there’s no version; 130 complains that the version can’t be specified in a GL2 context…

  2. Binding gl_Position results in an access violation crash in glLinkProgram. Looks like a driver bug to me…

Any ideas what to do next?

I’m just guessing at this point - I know I’ve used transform feedback recently on Nvidia hardware, but I don’t have access to ATI cards. Maybe add

out_Color = out_pos;

to the fragment shader, in case the varying is getting optimized away. I agree that binding gl_Position shouldn’t be causing a crash, since that’s what I’ve done in the recent past.

The last tests were performed with the following fragment shader:

#version 120
in vec4 out_pos;
void main()	{
	gl_FragColor = out_pos;
}
The result is the same… AFAIK, the driver doesn’t optimize across shader objects.

Did you try EXT_transform_feedback on NVidia, or did you just use the NV version of the extension? I’m not aware of any example of the EXT version.

I’m writing currently in OpenGL 3.1, which promoted EXT_transform_feedback to the core with not too many changes. I’m at a bit of a loss with providing additional help - it may well be driver bugs, or there is something subtle that I am overlooking.

So my question is: are you using transform feedback with GL 3.1? If you are, and you don’t see any mistakes in my code, could you please post some working code of yours?

I can’t say I have tried transform feedback - but have you enabled the extensions in your shaders?

for example

#extension GL_EXT_texture_array : enable