Request: tweak transform feedback

The following request is quite minor:

Tweak transform feedback so that:
(1) after linking, an application can query GL for the active varyings;

(2) what is recorded by transform feedback can be changed without relinking.

Note that this functionality is already present in NV_transform_feedback.

About (1):

After attaching the shaders and before linking, we have to declare the transform feedback varyings using glTransformFeedbackVaryings. In order to call glTransformFeedbackVaryings we have to know the names of the varyings.
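For reference, a minimal sketch of that order of operations, assuming prog, vs and fs are handles created elsewhere and "outPosition" is just a placeholder varying name:

C/C++:

// Varyings to capture must be declared before the program is linked.
const char* captured[] = {"outPosition"};

glAttachShader(prog, vs);
glAttachShader(prog, fs);
glTransformFeedbackVaryings(prog, 1, captured, GL_SEPARATE_ATTRIBS);
glLinkProgram(prog);
// Changing the captured set later means calling
// glTransformFeedbackVaryings again followed by another glLinkProgram.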

I may be wrong, but I believe there is no point in querying the driver for their names: we already know them by the time we call glTransformFeedbackVaryings.

Yes, but that way you don't have to set up feedback buffers for nothing when the varyings are not active. I guess that is how we would know whether a buffer should be used or not.

That's the first time I've seen an advantage of glTransformFeedbackVaryings over glTransformFeedbackVaryingsNV… except that it can't be used.

And when are the varyings not active? Can you give an example?

The typical case is when the fragment shader does not use them. In that case, the GL driver will typically work some magic to not bother computing those varyings in the vertex (or geometry) shader.

At any rate, being able to query the active varyings, along with changing what is captured by transform feedback without relinking, is worthwhile, as one may wish to capture just some of the varyings…
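For completeness, what core GL does let you enumerate after linking is only the list you declared yourself, which merely echoes back what you passed in. A sketch, assuming prog is an already-linked program:

C/C++:

GLint count = 0;
glGetProgramiv(prog, GL_TRANSFORM_FEEDBACK_VARYINGS, &count);
for (GLint i = 0; i < count; ++i)
{
    char name[128];
    GLsizei length;
    GLsizei size;
    GLenum type;
    // Only returns what was passed to glTransformFeedbackVaryings,
    // not which varyings actually survived the linker as active.
    glGetTransformFeedbackVarying(prog, i, sizeof(name), &length, &size, &type, name);
    // name now holds e.g. "outPosition"
}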

Though maybe if and when separate shader objects finally come to core, transform feedback will, I'd imagine, be reworked somewhat too.

I don't believe that this applies in the case of transform feedback.

Normally, if you have a varying in the vertex shader and the fragment shader has NO reference to it at all, the linker will probably optimize it away and discard it. BUT with transform feedback that's not the case: if you choose to use a varying for transform feedback, the linker will automatically make it active and exclude it from any such optimization.

In the example below, "outPosition" cannot be an inactive varying.

C/C++:

// "outPosition" is forced to stay active because it is captured.
const char* varsArr[] = {"outPosition"};
glTransformFeedbackVaryings(glId, 1, varsArr, GL_SEPARATE_ATTRIBS);

Vertex shader:

attribute vec3 position;

varying vec3 outPosition;


void main()
{	
	outPosition = position * 2.0;
}

Fragment shader:

void main()
{	
}

Absolutely spot on you are.

In NV_transform_feedback there is the function glActiveVaryingNV(GLuint program, const char *name), which makes it so that the named varying cannot be optimized out. This must be called before linking, as expected. For GL3-style transform feedback, all varyings to be written to a buffer are decided before linking and never change, so glTransformFeedbackVaryings implicitly also does the job of glActiveVaryingNV.
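From memory, the NV-style flow looks roughly like this (a sketch only; prog and the varying name are placeholders):

C/C++:

// Before linking: keep the varying alive even if the linker
// would otherwise optimize it away.
glActiveVaryingNV(prog, "outPosition");
glLinkProgram(prog);

// After linking: look up the varying's location and decide,
// at any time and without relinking, what gets recorded.
GLint loc = glGetVaryingLocationNV(prog, "outPosition");
glTransformFeedbackVaryingsNV(prog, 1, &loc, GL_SEPARATE_ATTRIBS_NV);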

The thing that goes through my head is: why is the GL3 spec the way it is? The NV spec gives better flexibility and fewer surprises.

The thing that goes through my head is: why is the GL3 spec the way it is? The NV spec gives better flexibility and fewer surprises.

Because it’s more consistent with the way that GLSL already works. You can’t redefine attributes after linking, nor can you redefine fragment shader outputs. So why would you be able to redefine transform feedback variables?
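For what it's worth, the bindings in question look something like this (a sketch; prog, "position" and "fragColor" are placeholder names):

C/C++:

// Both of these bindings are only snapshotted when the program is linked;
// calling them afterwards changes nothing until the next glLinkProgram.
glBindAttribLocation(prog, 0, "position");
glBindFragDataLocation(prog, 0, "fragColor");
glLinkProgram(prog);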

That has got to be the biggest can of BS I have ever heard. The use cases for changing which varyings to capture make a lot of sense for the interleaved case. Additionally, considering that the NV specification has this and that it works quite well with the rest of GL3, that line of BS looks even worse.

The use cases for changing which varyings to capture make a lot of sense for the interleaved case.

And there are use cases for changing what attributes map to what indices after linking too. Or for changing what fragment shaders are attached to what vertex shaders after linking. Etc.

You may not like it. I may not like it. But it is consistent with how the ARB has developed GLSL: these kinds of things are baked into the object at link time and remain fixed from that point on. Period.

And consistency has value.

Sighs.

And there are use cases for changing what attributes map to what indices after linking too. Or for changing what fragment shaders are attached to what vertex shaders after linking. Etc.

You may not like it. I may not like it. But it is consistent with how the ARB has developed GLSL: these kinds of things are baked into the object at link time and remain fixed from that point on. Period.

More BS from Alfonse, it is Friday, oh well.

The spec that transform feedback originally came from (the granddaddy, so to speak), NV_transform_feedback, fit changing what varyings were recorded just fine with the ARB GLSL model. It added an entry point to say "do not optimize this varying out", it has an entry point to query which varyings are alive after linking, etc. Nowhere am I saying to change the indices that attributes are tied to, or to change anything else.

Alfonse makes me sigh and once again a perfectly reasonable thread has degenerated into Alfonse blabber of uselessness.

The spec that transform feedback originally came from (the granddaddy, so to speak), NV_transform_feedback, fit changing what varyings were recorded just fine with the ARB GLSL model.

You’re saying this as though it’s not a matter of public record. I’m simply pointing out that the ARB version is more consistent with how the rest of GLSL works. If you don’t like that, tough: it’s still true and no amount of ad-hom will change that fact.

Remember: you asked the question as to why the ARB version is different. I simply provided an answer. Unless you have some reason to suggest that the NVIDIA version is in fact more consistent with the GLSL API, your argument fails.

NVIDIA rarely gives 2 thoughts to GLSL consistency; they do things the way they want to and that’s the end of it. It may ultimately be better than what the ARB does, but it isn’t as consistent with the rest of the API.

More BS from Alfonse (seconded). When you start your senseless NVidia bashing, know that you’ve lost the argument.

Beyond that, the NVidia driver guys get performance, stability, and results with GL, and are among OpenGL's strongest advocates. Hats off to them. Why you have this senseless dislike for them is beyond me.

“Consistency has value”, but is trumped by efficiency and flexibility. Don’t get so hung up on the past…

Just to pour boiling oil on the fire: there is absolutely nothing inconsistent about changing which varyings are to be recorded. Taking Alfonse's blabber further, I suppose we should also have to compile a special shader when we wish for an attribute to be fixed as well.

I will confess freely: NVidia's GL implementation rocks. It rocks in MS-Windows. It rocks in Linux. It works. The people behind it are wonderful, bugs get fixed when I have reported them. They have, in my eyes, pushed GL forward. The best, most exciting extensions come from NVidia: GL_NV_shader_buffer_load; no extension on GL3 hardware compares to that. Now in GL4 land: GL_NV_shader_buffer_store and GL_EXT_shader_image_load_store. It does not stop there: OpenGL/OpenCL interop and OpenGL/CUDA interop [I can almost see Alfonse saying "CUDA is NVIDIA only"]. GL_EXT_direct_state_access is there on NVidia hardware, AND many NV extensions have direct-state-access-style entry points too.

More BS from Alfonse (seconded). When you start your senseless NVidia bashing, know that you’ve lost the argument.

It’s funny. The NVIDIA version of the extension is inconsistent with standard GLSL principles. Namely, baking everything in a program at link time. Thus it is very clear that NVIDIA did not value consistency with standard GLSL practice.

But if you want some more evidence, look no further than EXT_separate_shader_objects. This extension cannot be used with user-defined varyings. It must use built-in varyings, virtually all of which are not available in core 1.40 or above. So NVIDIA created an extension which is not just inconsistent with existing GLSL practice, it is 100% incompatible with it.

NV_vertex_array_range. It encapsulates memory in a way that is fundamentally antithetical to how OpenGL has ever operated. Something similar could be said for NV_shader_buffer_load/NV_vertex_buffer_unified_memory.

I can keep going. You may like these extensions. I may like these extensions. But that doesn’t change the fact that NVIDIA has a history of making extensions that do not do things the way OpenGL has done them. This is not a statement about whether I think NVIDIA’s way is better or worse; it is simply pointing out the truth.

but is trumped by efficiency and flexibility.

The both of you seem to mistake facts for value judgments. All I said was that the reason the ARB used this method for transform feedback was that it was consistent with existing GLSL practice. I did not state or imply whether I think that it is a good idea, or whether consistency could or could not in this instance be trumped by other concerns. I'm simply stating what their reasons most likely were for implementing it as such.

NVidia’s GL implementation rocks. It rocks in MS-Windows. It rocks in Linux. It works. The people behind it are wonderful, bugs get fixed when I have reported them. They have, in my eyes, pushed GL forward.

It’s funny how nobody said anything contrary to that. I don’t know how it is you go from “NVIDIA has written quite a few extensions that are inconsistent with existing OpenGL practice” to “NVIDIA is crap.”

Yes, I like NVIDIA’s GL implementation. However, I also live in a world where ATI has plenty of good hardware out there too. A world where they have a pretty good OpenGL implementation too. So vendor-lockin is not something I’m interested in.

there is absolutely nothing inconsistent about changing which varyings are to be recorded.

I found this in the specification:

Yes, that does make the current transform feedback inconsistent with existing GL practice. But simply removing this prohibition would make it consistent with GL practice. Taking it to where NVIDIA did would be similarly inconsistent, if a bit more flexible.

Why do we even bother replying to Alfonse?

Here is a question for you, Alfonse: explain exactly how choosing to record or not record a varying after the program links is inconsistent.

Something similar could be said for NV_shader_buffer_load/NV_vertex_buffer_unified_memory.

I’ve already gone on and on about this with Alfonse… no point doing it again. Let me just summarize: I think on this issue Alfonse is a putz.

But if you want some more evidence, look no further than EXT_separate_shader_objects. This extension cannot be used with user-defined varyings. It must use built-in varyings, virtually all of which are not available in core 1.40 or above. So NVIDIA created an extension which is not just inconsistent with existing GLSL practice, it is 100% incompatible with it.

You know, if you actually read the spec, you will have noticed it was marked as "look, you can do this within the confines of GL, mostly". The issue is that they needed to bind by resource, not by name. Rather than modifying GLSL further, they chose the route of using existing GLSL variables that are in the compatibility profile. If you take a gander over at geometry shaders, the extensions that existed before they were in 3.2 made it a point not to modify the grammar of GLSL; it was with GL 3.2/GLSL 1.50 that geometry shaders came with a modification to the GLSL language.
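As far as I recall the EXT entry points, the usage looks roughly like this (a sketch from memory, not something to copy verbatim):

C/C++:

// Build two standalone programs from single shader strings; the interface
// between them is carried entirely by compatibility-profile built-ins.
GLuint vertProg = glCreateShaderProgramEXT(GL_VERTEX_SHADER,
    "void main() {"
    "    gl_FrontColor = gl_Color;"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;"
    "}");
GLuint fragProg = glCreateShaderProgramEXT(GL_FRAGMENT_SHADER,
    "void main() { gl_FragColor = gl_Color; }");

// Mix and match per stage without relinking anything.
glUseShaderProgramEXT(GL_VERTEX_SHADER, vertProg);
glUseShaderProgramEXT(GL_FRAGMENT_SHADER, fragProg);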

Let's make a bet: if and when separate shader objects come to core GL, they will modify GLSL (my bet is by giving the layout qualifier a bigger role).

I am feeding a troll I think and at this point enough is enough.

I wonder what Alfonse thinks of using Cg in GL then. No scratch that, I really don’t wonder and I really don’t want to hear more of his sewage.

I'm wondering the same thing. You seem to consistently miss the point he is making and instead try to make it sound as if he's implying stuff he is not. You're also resorting to kindergarten-level name calling, which is totally uncalled for and makes it appear like you have no good arguments.

There might be some very good arguments for why the NVIDIA way is superior. There might be some very good arguments for why the ARB did things the way they did. How about discussing those points in a serious and grown up manner?

Hmm… Lord crc is likely correct that I am being less than civil.

In a sincere attempt:

At any rate, Alfonse's big point is that a GLSL program is entirely baked at link time. That is fine. However, in no way does this contradict the idea that one can change which varyings to record during transform feedback. In fact, the NV specification is consistent with the "all baked at link time" deal, since guaranteeing that a varying can be recorded requires a function call made before linking.

Worse, on the issue of GL_NV_shader_buffer_load: one can look up the flames between myself and Alfonse. Nowhere did he make a clear argument or statement about how it contradicts any convention of GL. My take was that he took issue with the fact that you can fetch the "GPU address" of a buffer object and store that value in another buffer object, so that the second buffer can be used to fetch values from the first. However, I do not see how this violates GL convention: buffer objects are memory that must be read or written through GL. Perhaps his other beef was that one can save that address and use a different call to set the buffer object as a source for GL programs; again, though, this does not violate the convention that a GL buffer object is a blob of bytes that must be read or written through GL.
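To make concrete what I am describing, a rough sketch of the NV_shader_buffer_load calls (bufferA and bufferB are placeholder buffer object names; treat the details as approximate):

C/C++:

// Make the first buffer resident and fetch its GPU address...
GLuint64EXT addr = 0;
glBindBuffer(GL_ARRAY_BUFFER, bufferA);
glMakeBufferResidentNV(GL_ARRAY_BUFFER, GL_READ_ONLY);
glGetBufferParameterui64vNV(GL_ARRAY_BUFFER, GL_BUFFER_GPU_ADDRESS_NV, &addr);

// ...then store that address in a second buffer object, still going
// through GL to write it, so shaders reading bufferB can chase the
// pointer into bufferA.
glBindBuffer(GL_ARRAY_BUFFER, bufferB);
glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(addr), &addr);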

Continuing with his objection to separate shader objects: the NV specification uses built-in varyings (only available in the compatibility profile). This too is completely consistent with ARB conventions, where one can mix the fixed function pipeline with the programmable pipeline (i.e. just write a vertex shader and no fragment shader, etc.). Moreover, as additional evidence that this is consistent with ARB conventions, it does not modify the grammar of GLSL at all.