Pipeline Newsletter Volume 4

well it looks like they’re just like d3d vertex declarations…
http://msdn2.microsoft.com/en-us/library/bb206335.aspx

skynet, from my understanding of it, and judging by the diagram, the buffer objects used by the VAO are mutable, the rest isn’t.

Jan, looking at it that way draw objects are certainly not a good idea and I can see now why they’ve done it how they have.

Regarding the program objects they have (according to the diagram) the following attachment points:
[ul]
[li]Vertex program[/li]
[li]Fragment program[/li]
[li]Buffer object (used for the uniforms)[/li]
[li]Image objects and Texture Filter Objects (probably set in one go, see the glUniformSampler line)[/li]
[/ul]
Regards
elFarto

A couple of impressions and an exclamation …

  1. Increment of ref count when used – Personally I’d prefer managing object lifetimes myself, without having to second guess the API.

  2. Default framebuffer – I don’t mind supplying my own default framebuffer, in fact I’d prefer it that way.

  3. Debug context – oh yeah!

Originally posted by bonehead:
[b]

  1. Increment of ref count when used – Personally I’d prefer managing object lifetimes myself, without having to second guess the API.
    [/b]
The current OpenGL specification defines, for each object type, what happens when you delete an object that is currently bound somewhere. The driver then needs to implement that behavior correctly, which can carry a performance and implementation cost for handling a very special case that almost never happens.

Such a situation cannot happen with the reference-counting approach: the object holds at least one reference as long as it is bound somewhere, so neither the specification nor the driver needs a special case to handle it.
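The bind-takes-a-reference idea can be sketched in a few lines of C. All the names here (GLObject, lpBind, lpDelete) are invented for illustration; this is not the actual Longs Peak API, just the lifetime rule the post describes:

```c
#include <stdlib.h>

/* Hypothetical refcounted object; the field set is an assumption. */
typedef struct {
    int refcount;
    int alive;   /* nonzero while the storage is valid */
} GLObject;

GLObject *lpCreate(void) {
    GLObject *o = malloc(sizeof *o);
    o->refcount = 1;   /* the user's own reference */
    o->alive = 1;
    return o;
}

static void release(GLObject *o) {
    if (--o->refcount == 0) {  /* last reference gone: really destroy */
        o->alive = 0;
        free(o);
    }
}

/* Binding takes a reference, so a bound object can never dangle. */
void lpBind(GLObject **slot, GLObject *o) {
    if (o) o->refcount++;
    if (*slot) release(*slot);
    *slot = o;
}

/* User-side delete just drops the user's reference; a binding that
   still exists keeps the object alive until it is unbound. */
void lpDelete(GLObject *o) {
    release(o);
}
```

With this rule, "delete while bound" is not a special case at all: the delete merely drops one reference, and destruction happens whenever the last binding goes away.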

When you also introduce a “drawobject”, that would mean you need an awful lot of drawobjects. I am thinking of my octree, which has over 20 thousand nodes, and through culling there can be many different combinations of parts of the array I want to render. So doing this on the fly is, in my opinion, the way to go.
Why? What’s wrong with having 20,000 VAOs? Odds are that you have 20,000 C++ objects, one for each node, anyway. It’s not that much more memory for the implementation. A few pointers, some offsets, a couple of stride parameters. Hell, you’re going to have to store that information yourself anyway. Just let the driver do its job and stay out of it.

VAOs aren’t really on-chip memory; they’re client memory allocated and stored in the implementation. They contain references to objects and some state data for them.

So if you can allocate 20,000 C++ objects, why can’t GL?

However, it sounds like, when changing some buffer-object (that contains uniforms), the shader needs to be validated (linked?), because that buffer might have a different layout.
Why would it? The program doesn’t change just because some uniforms changed, nor does the buffer need to be validated or have its layout changed.

What it does mean is that the nonsensical nVidia “optimization” where they recompile the shader if you change certain uniform values goes away. But as far as I’m concerned, that’s the way it should be.

Would that mean I’d have to create another VAO for each object just for this pass?
Why not? You need to have a different program object and set of blending parameters (maybe) anyway, so what’s one more object? From an API standpoint, I prefer that it have an entirely separate VAO, just so that it matches with its entirely separate program and separate blend parameters.

We are talking about an object that takes up, maybe, 32 bytes per vertex attribute. And that’s worst-case; it’s probably more like 16 (offset, stride, BO-pointer, and an enum/switch/bitfield for the format [int, short, etc]).
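That per-attribute size estimate is easy to sanity-check. The struct below is a guess at a plausible worst-case record layout, not anything from the Longs Peak spec; the field names and widths are assumptions matching the list above (offset, stride, buffer reference, packed format):

```c
#include <stdint.h>

/* One hypothetical VAO vertex-attribute record. On a typical 64-bit
   platform this packs into 16 bytes, in line with the estimate. */
typedef struct {
    uint64_t buffer;   /* handle or pointer to the buffer object */
    uint32_t offset;   /* byte offset into the buffer */
    uint16_t stride;   /* byte stride between elements */
    uint8_t  format;   /* packed enum: float, short, byte, ... */
    uint8_t  size;     /* component count, 1-4 */
} VaoAttrib;
```

Even with generous padding or extra flags, 20,000 nodes times a handful of attributes is well under a megabyte of driver-side bookkeeping.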

Personally I’d prefer managing object lifetimes myself, without having to second guess the API.
Those objects are owned by the server, so no, you won’t be doing that. You aren’t doing it in 2.1, and you won’t be in the future.

Originally posted by elFarto:
Of course this raises the question, why can’t all these options + all the objects that are bound to the context (fbo/vbo/program objects/etc…) be wrapped up into a ‘Draw Object’? Then the draw command is just lpDraw(drawObject);
Really, what use would that be?
The common case is state sorted, only an idiot would package everything up like that and just draw in an arbitrary order. I can’t imagine a scenario where I would actually use the proposed drawobject, except perhaps in some weird prototype app.
In any case, this stuff would probably be folded into the display list object, when they get round to it.

Suggestion to the ARB, on the “per-sample operation” object:

One object for all of these parameters is wrong.

From the user’s perspective, blending and, say, depth testing are two different settings that are set from two different places. A user’s object would know how it blends, so it should have its blend parameters/object/etc. In that way, it is similar to a program object.

But why would the object decide how the depth test works? How depth testing happens is not really something the object needs to be aware of. Currently, that’s a sort of “set and forget” parameter. You set it globally, and change it very infrequently. Certainly not on a per-user-object basis.

From the user’s point of view, lumping the depth test in with the blend functions is asking for trouble. It makes it hard to change the parameter globally, as you have to go around and change it in all of the objects that render with it.
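The maintenance problem is easy to see in a sketch. The struct and names below are invented, but they show what happens when a “set and forget” parameter is duplicated into every per-object state block:

```c
#include <stddef.h>

/* Hypothetical combined per-sample-ops block, one per rendered object. */
typedef struct {
    int blend_src, blend_dst;  /* genuinely per-object state */
    int depth_func;            /* global in spirit, yet duplicated here */
} SampleOps;

/* Changing the depth test now means touching every object's block,
   instead of flipping one piece of shared state. */
void set_depth_func_everywhere(SampleOps *objs, size_t n, int func) {
    for (size_t i = 0; i < n; i++)
        objs[i].depth_func = func;
}
```

With a separate depth/stencil object shared by all renderables, the same change would be a single update.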

Originally posted by knackered:
Really, what use would that be?
None, as I’m starting to see. Just seeing all the shiney new objects, I was wondering why drawing didn’t have an object, and it makes sense that it doesn’t.

Regards
elFarto

Originally posted by Korval:
But why would the object decide how the depth test works? How depth testing happens is not really something the object needs to be aware of. Currently, that’s a sort of “set and forget” parameter. You set it globally, and change it very infrequently. Certainly not on a per-user-object basis.
What the heck?
I haven’t had the time to read this stuff but hope it’s going to be sensible. The whole idea of a new GL was to clean out the clutter and be a thin layer. The state machine is a beautiful thing.

Or maybe they want to design it in such a way that a fixed amount of data is sent to the GPU every time a draw call is made.

One object for all of these parameters is wrong.
Hasenpfeffer.

Or maybe they want to design it in such a way that a fixed amount of data is sent to the GPU every time a draw call is made.
They’re trying to make it so that a future “blend shader” can easily be dropped into place, without the API cruft that accrued when GL went with shaders.

The only thing, that comes to mind right now, is that drawcalls do not include an “offset” parameter for the indices (offset added to each index, not the thing the “first” parameter is used for).
As currently defined, you specify an offset when attaching a buffer to the VAO (IOW, it is a mutable VAO attribute). I didn’t have room to go very deeply into the individual object attributes and behaviors in that article.

Originally posted by skynet:
Anyway, looks like VAOs need some more detailed explanation :slight_smile:
Read the spec that has been shipping since 2002.

The APPLE_vertex_array_object spec almost certainly doesn’t explain how VAOs are expected to work in LP. And I don’t think they are in any way similar, except maybe the idea “let’s put the array bindings and enables into an easily bindable object”.

One thing I’ve just remembered. In volume 3 of the pipeline, there is a piece of example code for creating an image object. There are 2 lines in particular I’m interested in:

GLtemplate template = glCreateTemplate(GL_IMAGE_OBJECT);

GLbuffer image = glCreateImage(template);

Is it your intention to have a glCreate{Image,Buffer,Sampler,Sandwich,…} function for every object type? This will considerably increase the number of functions.

Is it possible to have a glCreateObject(GLtemplate template); function for all object types instead?

Actually I’ve just realised this is because the return type can then be checked by the compiler. Perhaps a macro for this:

#define glCreateImage(t) ((GLbuffer) glCreateObject(t))

Regards
elFarto

Actually I would prefer syntax like

lpCreateObject(template, obj_type)

and

lpBindObject(obj, obj_type)

to having N versions for each object type

Originally posted by Zengar:
[b] Actually I would prefer syntax like

lpCreateObject(template, obj_type)

and

lpBindObject(obj, obj_type)

to having N versions for each object type [/b]
Passing the object type to the create call would be redundant. You’ve already specified it when creating the template; the driver can just store the type in the GLtemplate structure and use it when you call createObject. But you would need the #define’s to gain some form of type safety with this method.
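A minimal sketch of that scheme, with the type baked into the template and macros layered on top. The implementation here is entirely invented (the newsletter only shows the call names), so treat it as one possible reading:

```c
#include <stdlib.h>

/* The type is recorded once, at template creation. */
typedef struct { int obj_type; } GLtemplate;

GLtemplate *glCreateTemplate(int obj_type) {
    GLtemplate *t = malloc(sizeof *t);
    t->obj_type = obj_type;
    return t;
}

typedef struct { int type; } GLobjectRec;

/* One generic entry point: it reads the type back out of the template,
   so no second type argument is needed. */
void *lpCreateObject(GLtemplate *t) {
    GLobjectRec *o = malloc(sizeof *o);
    o->type = t->obj_type;
    return o;
}

/* Casting macros document intent at each call site; real compiler
   enforcement would need distinct struct types per handle. */
typedef GLobjectRec *GLimage;
typedef GLobjectRec *GLbuffer;
#define lpCreateImage(t)  ((GLimage)  lpCreateObject(t))
#define lpCreateBuffer(t) ((GLbuffer) lpCreateObject(t))
```

This keeps the entry-point count down at the cost of weaker type checking, which is exactly the trade-off being debated here.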

I do like your lpBindObject though. In the pipeline, they say the context is like a container, so it stands to reason it could have attachment points just like the other containers. Making these attachment points more extensible can only be a good thing. Eg:

lpBindObject(LP_VERTEX_ARRAY, vao);
lpBindObject(LP_PROGRAM_OBJECT, program);
lpBindObject(LP_FRAMEBUFFER, fbo);
lpBindObject(LP_SAMPLE_OPS, sampleops);
lpBindObject(LP_MISC, misc);

Regards
elFarto

I do agree that one object for sample parameters is a bit clumsy. D3D10 distinguishes 4 or 5 pipeline stages and therefore has 4 or 5 such parameter blocks. And I think they, too, handle all the blending stuff as a separate stage.

If one would later on drop in a blend-shader, this would make even more sense, because then you just ignore the whole blend-state, which is now part of some other state. It would be more modular.

I don’t think there should be one lpCreateObject and one lpBindObject. This restricts you very much. That would mean all objects would need to be described using a GLtemplate. Additionally your compiler can’t aid you with any kind of type-checking.

I think having several functions lpCreateImage, lpCreateShader, … is the better way to go. There are only a handful of types, so the growth in necessary functions is not an issue; it doesn’t grow combinatorially, as there are no such dependencies. Having a separate create function means you can have separate template structures. That makes code more readable and drivers easier to implement, compilers can check types AND you can extend it much better. If there is a new type, the extension just adds a create and a bind function, that’s it.
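The per-type alternative looks something like this. Again, every name and template field below is invented for illustration; the point is only that each create takes its own template type, so mixing them up fails at compile time rather than at run time:

```c
#include <stdlib.h>

/* Each object type gets its own template structure... */
typedef struct { int width, height, format; } LPimageTemplate;
typedef struct { int stage; const char *source; } LPshaderTemplate;

/* ...and its own distinct handle type. */
typedef struct LPimage_  { LPimageTemplate  desc; } *LPimage;
typedef struct LPshader_ { LPshaderTemplate desc; } *LPshader;

/* The compiler now rejects lpCreateImage(&some_shader_template)
   outright; no casts or #define tricks are needed. */
LPimage lpCreateImage(const LPimageTemplate *t) {
    LPimage i = malloc(sizeof *i);
    i->desc = *t;
    return i;
}

LPshader lpCreateShader(const LPshaderTemplate *t) {
    LPshader s = malloc(sizeof *s);
    s->desc = *t;
    return s;
}
```

Compared with the single generic lpCreateObject sketched earlier in the thread, this trades a few more entry points for genuine static type checking.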

Doing it by declaring different enums, that you pass to the function is more or less the same thing, just less flexible.

Jan.

Please don’t flame me, but do we really need Longs Peak?
To me it seems that OpenGL 2.1 is just a little messed up from 20 years of evolution, so why not remove the redundant calls/types… (like OpenGL ES 2.0)?
I love the way OpenGL is organized; the current state machine is very powerful and elegant (though it does have limitations for debugging/understanding).
To me Longs Peak seems like some DirectX version.
Yes, it will be easier for driver writers and have less overhead, but all of this could be achieved with the current OpenGL by going the way ES went.

I know Korval, knackered and the rest will flame me, but still: do we need it?
Ido

Originally posted by Jan:
I do agree that one object for sample parameters is a bit clumsy. D3D10 distinguishes 4 or 5 pipeline stages and therefore has 4 or 5 such parameter blocks. And I think they, too, handle all the blending stuff as a separate stage.
If you want to compare it with D3D10, you have to count 3 state objects: Rasterizer, Blend and DepthStencil.

The other two D3D10 state objects are the input layout and the sampler. The first can be somewhat compared with the vertex array object, but it stores only the vertex layout and no references to the buffers.

The sampler state could be compared with a texture filter object.