API evolution - idea exchange

This thread is for discussing what we want to add to GL going forward: a place to hash out ideas on the various features we want in GL3+, and how they should be used.

e.g.
I would like to see a .glfx file format: all the shaders wrapped into one file that encompasses all the texture states and uniforms. I know some have no desire for this, but others do, so topics like this should be discussed here to get to the bottom of whether it's a good idea or not.

On the topic of .glfx files, I would like to see the shaders main() changed to

//vs
void VSMain(void)
//fs
void FSMain(void)
//gs
void GSMain(void)

to clear up the confusion of what function goes to what shader type. This is one reason why FxComposer hasn’t had GLSL support, as the entry points are all generic void main(void) calls.

That never made sense to me either (everything called main).

A really good distinction was made by Cass in another thread where he contrasted something like Cg/HLSL, as content, with GLSL, which is not really content. I think if you really get your head around this distinction GLSL makes a lot more sense.

Remember, this thread is open to anyone with gripes, suggestions, etc. on GL3+, so that we have a log and a discussion of what should be changed; if everyone agrees, then let's push it to the ARB for review. Off the top of my head, the complaint I stated above seems like a reasonable one. I would also like to know why one wouldn't want to use a .glfx format to store all your shader code in one file… I would like GL to have a function that just loads this single file…

glLoadGLFX(const char* filename, GLuint& programID);

Anyway, something along those lines. The file format could be done in .xml or .txt, whatever; I don't care, as long as I don't have to parse the damn thing myself and can let GL do it.
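Usage would look something like this (purely hypothetical, of course; glLoadGLFX doesn't exist anywhere, and the file name is made up):

GLuint programID = 0;

// One call: GL (or a utility layer) parses the effect file,
// compiles every shader inside it, and links the program.
glLoadGLFX("character.glfx", programID);

glUseProgram(programID); // ready to render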

I still think the fx format stuff should be a separate library, but one thing that I found very cool about Longs Peak was #include… they wanted to do something with named text buffers, which could be referenced by the shaders. I also still think GL should not have anything to do with file handling.
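Something like this sketch is what I mean (the entry point and token are the ones from GL_ARB_shading_language_include; the buffer name is made up):

// Register a named text buffer with GL. No file I/O in the driver;
// the application supplies the source string itself.
const char* lighting = "...shared GLSL lighting code...";
glNamedStringARB(GL_SHADER_INCLUDE_ARB, -1, "/common/lighting.glsl", -1, lighting);

// A shader can then pull it in by name:
//   #extension GL_ARB_shading_language_include : require
//   #include "/common/lighting.glsl"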

Honestly, glFX is the last thing the ARB needs to get working on.

Until we can build separate program stages without the heavyweight link stuff for every combination of stages, until we have instanced programs (separation of programs and data), until we can bind textures directly to programs, etc.

Until all that stuff gets into the core (not just meaningless extensions), glFX is meaningless. Whether it’s main() or whatever is also the last thing that needs to be dealt with.

The problem with the ARB is a complete lack of prioritization of features. Long-time pain points of OpenGL go unresolved, but hey, we’ve got an integer pipeline for shader stuff.

just skimming through the 2007 BOF slides:

  • image objects, filter objects and format objects
  • state objects
  • uniform buffers

they were (still are?*) on the right track.

*any BOF 2008 slides?

I also would like to have this changed, however I don’t see it as a reason for FXComposer not supporting GLSL. And I say that because other vendors do provide editors that support GLSL properly.

Maybe the reason only has to do with how FXComposer was designed, I guess.

@Korval: glFX is not something that competes for time among the OpenGL working group members as far as I can tell. It’s a separate group.

@Chris: Backing uniforms with buffers is one of the higher priority items in the queue for the next release (whether it is called 3.1 or something else). The existing bindable-uniform extension addresses some ISV goals but not others, so we are looking at something richer.
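For reference, the bindable-uniform path looks roughly like this (a sketch; error checking omitted, and "viewProj" is just an example name):

// GLSL, declared identically in both programs:
//   #extension GL_EXT_bindable_uniform : enable
//   bindable uniform mat4 viewProj;

GLint locA = glGetUniformLocation(progA, "viewProj");
GLint size = glGetUniformBufferSizeEXT(progA, locA);

GLuint buf;
glGenBuffers(1, &buf);
glBindBuffer(GL_UNIFORM_BUFFER_EXT, buf);
glBufferData(GL_UNIFORM_BUFFER_EXT, size, NULL, GL_DYNAMIC_DRAW);

// Attach the same buffer to both programs; writing the buffer once
// updates the uniform in every program it is bound to.
glUniformBufferEXT(progA, locA, buf);
glUniformBufferEXT(progB, glGetUniformLocation(progB, "viewProj"), buf);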

If you want a list of pain-points in OpenGL’s API, in priority order, from most important to least:

1: Inability to mix&match vertex and fragment (and soon geometry and other) shaders without having to do a heavy-weight linking operation for each match.

2: Inability to have program state data be separated from the fully-linked program, so that one can have program state data that is instanced between all models that use the same program.

3: Inability to have uniform values that are shared between separate programs, without resorting to built-ins.

4: GL_FRAMEBUFFER_UNSUPPORTED. That is, we do everything right, and for a reason that the driver won’t even deign to explain, it says no. This makes framebuffers completely unreliable, and therefore unusable.

5: Not being able to attach textures directly to the programs that use them; instead, we have to use this ridiculously circuitous method (which is very unintuitive).

6: Binary blobs for shaders.

And the really stupid thing? All of the above will require higher-end graphics hardware (G70+, R600+), even though older hardware can support it just fine. The ARB has entangled new graphics functionality with any API improvements, and that’s not acceptable.

Unsurprisingly, these are exactly the problems that Longs Peak was going to solve. So it’s not like the ARB doesn’t already have working knowledge of what needs to be fixed. They just never fix it.

glFX is not something that competes for time among the OpenGL working group members as far as I can tell. It’s a separate group.

Yes, but glFX doesn’t control glslang. Changing the definition of “main” would require changing glslang.

Korval, I completely agree with items 1, 2, 3, and 6.

Re #4 - this would happen when you requested a framebuffer configuration that could not be satisfied, due to either:
a - the hardware cannot do the thing you want (HW limit), or
b - the driver doesn’t have a code path that can satisfy your request (driver limit or bug).
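For reference, both cases surface at the same place today, with the same unhelpful answer (standard EXT_framebuffer_object usage):

glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
if (status == GL_FRAMEBUFFER_UNSUPPORTED_EXT) {
    // The driver rejected this combination of attachments, with no
    // indication of whether it hit case "a" or case "b"; all the
    // application can do is try a different configuration.
}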

Format objects were a part of the LP concept, but consider what happens in each category above.

If “a”, when run on the same machine, you would not be able to create the format object you wanted. So no new capability is created, you just get the error earlier.

If “b” - the outcome is the same as well. You could get the error sooner with an LP format object, but this doesn’t tell you why (bug or HW limit).

It’s important to note that the hardware targeted by GL3 has greatly relaxed restrictions in this area, so the set of FB configs in set “a” will be a lot smaller.

With the Image Format stuff, an implementation could only deny you if the arrangement of image formats didn’t work out. Which means that you are guaranteed that any textures that use those formats will work when bound to that FBO. There is no API guarantee that a similar texture to the one that did work will continue to work, since there is no way to define what a “similar” texture is.

I suppose, thinking about it now, that binary blobs should be higher than the framebuffer issue. But it does need to be addressed.

So, as of now, we are all in agreement that we need to address these key issues below in GL 3.1…

1: Inability to mix&match vertex and fragment (and soon geometry and other) shaders without having to do a heavy-weight linking operation for each match.

2: Inability to have program state data be separated from the fully-linked program, so that one can have program state data that is instanced between all models that use the same program.

3: Inability to have uniform values that are shared between separate programs, without resorting to built-ins.

6: Binary blobs for shaders.

2 & 3 look like facets of the same issue to me.

2 & 3 look like facets of the same issue to me.

It depends on how #2 is implemented, but usually not.

Let’s say you have a bunch of characters that all render using the same shader program. However, some of them will have different uniform parameters (colors, for example); this is a value you will set on a per-use basis. That is, in the engine, each entity gets a color assigned to it. Some may be the same, but you want there to be different colors for different entities.

The problem is that there is one shader program object (because you’re certainly not going to link the same program object once per entity). It has one set of state. If you want to use it to draw multiple entities using the same program, you have to change the uniform for every rendering. Even if the internal value that you’re setting the uniform from never changes, you still have to update this state just to make the API happy.
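Concretely, the churn looks like this today (GL 2.0 calls; the entities container, uniform name, and drawEntity helper are made up for illustration):

glUseProgram(prog); // one program object...
GLint colorLoc = glGetUniformLocation(prog, "entityColor");

for (size_t i = 0; i < entities.size(); ++i) { // ...shared by every entity
    // entities[i].color may be identical to last frame's value, but the
    // program object has only one slot for it, so it must be rewritten
    // before every draw.
    glUniform4fv(colorLoc, 1, entities[i].color);
    drawEntity(entities[i]);
}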

To be honest, a simple “copy program object” extension would be good enough to do #2. That is, you would get a program object that would function as though the same shaders were linked, but it would have an independent set of state on it.
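In sketch form (glCopyProgram is hypothetical; no such entry point exists):

// Hypothetical: clone the linked program without relinking.
GLuint entityProg = glCopyProgram(sharedProg);

// Set the entity's color once, on its own instance; after this there
// is no per-draw uniform churn against the shared object.
glUseProgram(entityProg);
glUniform4fv(glGetUniformLocation(entityProg, "entityColor"), 1, entityColor);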

The main issue that #3 is trying to solve is sharing state among multiple programs, so that you can change a uniform in one place and have it naturally propagate up through all of the programs.

There are ways to implement #2 such that #3 is solved, but doing it really requires breaking the concept of “program object” apart into its constituent components. That’s going to deprecate a lot of API, and do so fairly unnecessarily. Longs Peak was going to get there, but I thought the whole point of the “not-Longs Peak” was to make the smallest changes needed to get the functionality done.

By “sharing state among multiple programs”, what kind of data are you talking about? Could this be uniforms (I think it could, looking at the built-ins)? Which are, after all, “application state”.

If uniforms can be shared between programs, I don’t see the usefulness of #2, which implies a program state shared between all shaders attached to it. Programmers could share data at the shader level (#2) as well as at the program level (#3). Two ways to do apparently the same thing… just to keep the concept of “program object”, which seems to make no sense with #3, since the nature of the data shared at the program level and at the shader level is apparently the same.

In conclusion, I think that only #3 makes sense, not #2 and #3 together; otherwise, a well-specified border needs to be defined between these two features.

Even if all uniforms were stored in buffer objects, you’d still want the bindings between programs and buffer objects kept in a separate object. That way you can quickly restore them for any mesh.

Programmers could share data at the shader level (#2) as well as at the program level (#3).

Shaders don’t have state. Well, not anything significant. Shaders do not have uniform state, which is what matters.

You should spend some time familiarizing yourself with the shader/program dichotomy that OpenGL’s shading language uses. It’s different from most other systems.

I think I did not formulate my sentence correctly. By sharing at the shader level, I was talking about the program state used to share data among shader objects. So in the end, the shared data is used at the shader level.

And by the program level, I mean data shared between programs, which could be used by shaders that live in different programs.

By sharing at the shader level, I was talking about the program state used to share data among shader objects.

Shader objects don’t have state. Like I said, you need to read up on how shader objects and program objects work.

Shader objects do not have uniforms. Program objects do. It’s really that simple.
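To make the dichotomy concrete (standard GL 2.0 calls; the uniform name is just an example):

GLuint vs = glCreateShader(GL_VERTEX_SHADER); // shader object: source + compile status, nothing more
glShaderSource(vs, 1, &vsSource, NULL);
glCompileShader(vs);

GLuint prog = glCreateProgram(); // program object: this is what owns the uniforms
glAttachShader(prog, vs);
glLinkProgram(prog);

// Uniform locations and values are queried and set on the program,
// never on the shader:
GLint loc = glGetUniformLocation(prog, "color");
glUseProgram(prog);
glUniform4f(loc, 1.0f, 0.0f, 0.0f, 1.0f);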