Setting a vertex base index in a glDrawElements call

Originally posted by l_belev:
The fact that an index offset would not be too useful for glArrayElement is not enough reason to cut it off. In fact glArrayElement itself is rarely useful in practice, so this whole issue isn't of much importance, but what's more important here is to preserve the specification's clarity and consistency. I think the original variant is better - it does not introduce unnecessary discrimination between glArrayElement and glDrawElements.
Which was, in a way, my point. idr’s preliminary spec creates a new dependency on vertex array state, and affects all entry points referencing vertex arrays.

Look at what was asked for in the thread topic. We don't need an extension/change to ArrayElement behaviour per se, because ArrayElement and DrawArrays already have this functionality built in. I find it undesirable to insert another hook there, because the whole purpose is extending DrawElements functionality. ArrayElement is not the right place to extend; DrawElements itself is.

It might even be better not to have a state value for this at all, and just insert a new argument into an extended glDrawElements entry point (with whatever name seems appropriate). This would eliminate all potential confusion, at the cost of a few new entry points.
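Purely as an illustration (the entry-point name and parameter order below are made up here, not taken from any actual spec), such a call might look like this:

/* Hypothetical entry point; the name and signature are illustrative only.
 * 'basevertex' would be added to every index fetched from the element
 * array before the enabled vertex arrays are dereferenced.
 */
void glDrawElementsBaseEXT( GLenum mode, GLsizei count, GLenum type,
                            const GLvoid *indices, GLint basevertex );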

Otherwise one would have to remember one more unnecessary rule - when the index base is applied and when not.
If you just think of it as extending DrawElements and friends, I'd think that's easy enough to remember. An "Overview" section might word it as affecting element indices sourced from an index array – VBO ELEMENT_ARRAY_BUFFERs or pointers to client memory, as handed to DrawElements, MultiDrawElements and DrawRangeElements. Nothing else. This is an overall smaller change to the GL spec.

For glDrawArrays the base index isn't applied anyway, so it is out of the question.
As of idr’s preliminary spec, it would affect DrawArrays. Again, I find this redundant.

Originally posted by zeckensack:

Otherwise one would have to remember one more unnecessary rule - when the index base is applied and when not.
If you just think of it as extending DrawElements and friends, I'd think that's easy enough to remember. An "Overview" section might word it as affecting element indices sourced from an index array – VBO ELEMENT_ARRAY_BUFFERs or pointers to client memory, as handed to DrawElements, MultiDrawElements and DrawRangeElements. Nothing else. This is an overall smaller change to the GL spec.

I can understand your point. This is one of those cases where different people look at the same thing from slightly different angles. It is hard to say which is the "right" point of view, since for this kind of matter there is no universal truth. But I still think that using the index base for glArrayElement as well as for glDrawElements is somewhat more consistent.

Here is an example of when it would be useful: imagine that normally you use glDrawElements (or a friend) for drawing something, but then you write another path for the same thing, using glBegin/glEnd and glArrayElement for debugging purposes, because you want to do something specific for every vertex drawn (can't think what exactly at the moment). And you want the setting of the index base and the actual drawing to be located in different places in your software (a question of organization). Then obviously it would be cleaner if the index base worked for glArrayElement too.
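A rough sketch of what that debug path might look like, assuming the base (set elsewhere in the program) applies to ArrayElement as well:

/* Debug path: iterate the same index list by hand so per-vertex work can
 * be done, relying on the previously-set index base being applied by
 * ArrayElement just as it would be by DrawElements.
 */
glBegin( GL_TRIANGLES );
for ( i = 0; i < index_count; i++ ) {
    /* ... per-vertex debugging work here ... */
    glArrayElement( indices[i] );
}
glEnd();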

On the other hand, I can't think of a case where applying the index base to glArrayElement would be a bad thing or an obstacle to anything.

For glDrawArrays the base index isn't applied anyway, so it is out of the question.
As of idr's preliminary spec, it would affect DrawArrays. Again, I find this redundant.
Well, I didn't see that. It probably would be unnecessary.

But don't worry. If this extension reaches the ARB for consideration, they will clear up any such little issues.

I have to admit, I specified things the way I did for somewhat selfish reasons. I found that just modifying the behavior of ArrayElement was very easy to specify. That makes the extension spec short and sweet.

Additionally, just modifying the behavior of ArrayElement makes it much easier for me to implement the extension in the drivers that I maintain. Not only am I concerned about the software paths in Mesa, but I also work on the hardware TNL paths for the open-source R100 and R200 drivers. There are certain cases in those drivers where the DrawElements / DrawRangeElements implementation can fall back to actually calling ArrayElement. It turns out to be much easier for me to either poke the hardware's base-index register or, in the fallback case, just let ArrayElement do the work. Modifying the various DrawElements-type calls would require modifying code in more places. To me, that equates to more chances for bugs.

I am strongly opposed to adding another drawing entry-point. To keep the spec consistent, we’d have to add at least 3 new entry-points: one for each of DrawElements, DrawRangeElements, and MultiDrawElements. I’m sure some sick puppy would also ask for a modified version of MultiModeDrawElementsIBM. :wink: If a function like MultiDrawRangeElements is added at some point, a special version of that would also be needed. Having watched and taken part in a few ARB votes, I can say that most people are opposed to APIs that may lead to an “explosion” of entry-points.

We’d also have to specify the interactions with ATI_element_array, APPLE_element_array, IBM_multimode_draw_arrays, and (probably) SUN_mesh_array. The current spec lets us just say that this extension implicitly affects any function that does vertex-array rendering in the “obvious” way.

Additionally, if you think about code that has to support the cases where the extension is and isn’t supported, resetting the base index is a very natural way to do it:

if ( MESAX_array_element_base_supported ) {
    /* Extension present: one call sets the base for all enabled arrays. */
    glArrayElementBaseMESAX( new_base );
}
else {
    /* Fallback: adjust every enabled vertex-array pointer by hand
     * to account for the new base.
     */
    glVertexPointer( ... );
    ...
}

glDrawRangeElements( ... );

I think I’m going to implement the spec pretty much as-is in Mesa and let people play with it. Implementation experience (both driver and application) will tell us where to go from there.

Originally posted by idr:
We’d also have to specify the interactions with ATI_element_array, APPLE_element_array, IBM_multimode_draw_arrays, and (probably) SUN_mesh_array. The current spec lets us just say that this extension implicitly affects any function that does vertex-array rendering in the “obvious” way.
Maybe it's worth considering NV_primitive_restart? It works with indices after all, and it doesn't seem like it would take a lot of time to specify.

Originally posted by l_belev:
Neither can I think of such a problem. As I said, I just gave this as an example and didn't mean to push for it. But you didn't read my entire post, did you?
My apologies, but more likely I read it and then forgot it while writing (whoops :rolleyes: my fault).

Here’s another vote for the extension. I envision a slightly different use for it than the examples that were mentioned here. See this thread.

No, I don't believe the extension will help too much with that. Specifying the buffer offsets once a frame, or even once per patch, wouldn't impact performance too much - respecifying all the offsets once per object is where the real hurt is.
Your question could be answered by simply respecifying your VBO offsets for each patch, or whatever scheme you're using (I haven't read your thread too carefully).
But you’re right, this extension would be extremely useful even for you, I suppose - in fact, the whole vertex/index array mechanism looks distinctly odd without it (see d3d9 again).

The usefulness of this extension depends on this: will the driver simply do the respecifying for you? If so, then you don't gain any performance by it. Considering nVidia's more unusual implementation of VBO, it is entirely possible that the same performance penalty one would incur from calling a gl*Pointer would appear when setting the offset.

Bump

I’m still interested. Any reason not to have this?

If we first limit the scope to VBO, the last argument/parameter of e.g. glVertexPointer is basically the same thing as this proposal, is it not? As such, in the scope of VBO, is it really useful?

For non-VBO, I can see some limited use for this (perhaps the negative index is a bit enthusiastic, but who am I to judge :slight_smile: ). The question for me is, however: would this be of enough utility to enough users, both today and in the future (where "in the future" holds way more weight, I think), to warrant a "real" proposal? In the light of VBO? After all, this proposal would be added at a time when VBO is already in place.

Having read the thread, and given it a bit of thought, I currently lean towards not.

Perhaps the largest argument against it currently is that, as has been noted in the thread, this seems not to have been a very frequent request. That does not invalidate the request; it might even just indicate that no one ever considered it previously, and there might be uses for it. It would, however, create ripples all through the spec for all array types.

My personal feeling, however, is currently that this is something I - as the application programmer, or the scene graph software, or whatever role I'm playing - can handle better myself. It is state I need to keep track of anyway (and way more, in addition to the driver needing to do it too), and it might be "expensive" to send a 64K short array to the server today - but it's getting cheaper (time-wise) by the day. Would we want to encumber the spec with this for all time, just to possibly get some speed-up today?

Just add a few wrapper functions and be done with it. :slight_smile:

If we first limit the scope to VBO, the last argument/parameter of e.g. glVertexPointer is basically the same thing as this proposal, is it not? As such, in the scope of VBO, is it really useful?
The usefulness comes from implementation-specific issues, namely that gl*Pointer calls are "expensive". Why they are expensive within the scope of the same buffer object (I understand why an actual change of buffer would be somewhat expensive), I don't know. But the evidence seems to point to some cost associated with gl*Pointer calls. So, people want a lightweight way to tell the system to just shift the read pointer X entries over.
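For comparison, here is roughly what the heavyweight path looks like today with an interleaved VBO versus the single-call path (a sketch only; the stride and member offsets are placeholders, and glArrayElementBaseMESAX is the name used earlier in this thread):

/* Today: every enabled array has to be re-pointed by hand. */
size_t base = (size_t) first_vertex * stride;
glVertexPointer  ( 3, GL_FLOAT, stride, (const GLvoid *) (base + 0)  );
glNormalPointer  (    GL_FLOAT, stride, (const GLvoid *) (base + 12) );
glTexCoordPointer( 2, GL_FLOAT, stride, (const GLvoid *) (base + 24) );
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, index_offset );

/* With a base index, one lightweight call replaces all of the above: */
glArrayElementBaseMESAX( first_vertex );
glDrawElements( GL_TRIANGLES, count, GL_UNSIGNED_SHORT, index_offset );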

Would we want to encumber the spec with this for all times, just to possibly get some speed-up today?
Sure. Why not? Isn’t programming today important?

I'm still voting for such an extension. As I said earlier, changing the pointers (or offsets in the case of VBO) is cumbersome, since you have to remember which arrays are activated and how they are laid out (the app has to "log" its own glXXXPointer calls).
Using this extension allows changing all pointers in a single call, all of them with the correct stride. It would also allow easily storing more than 64K vertices in a single VBO while still using 16-bit indices for the glDrawElements() calls.
It is possible to do these things today already, but this new extension would give us a way to achieve the same with less effort and possibly let the driver do it more efficiently. Even if it boils down to the driver doing the glXXXPointer calls, it wouldn't be much of a problem (no loss in speed, more convenience for us).
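As a sketch of that >64K case (assuming meshes are packed back to back in one VBO, and reusing the call name from earlier in the thread):

/* Each mesh has at most 65536 vertices, so its indices fit in GLushort
 * even though the shared VBO holds far more vertices in total.
 * mesh[i].first_vertex is where that mesh's vertices start in the VBO.
 */
for ( i = 0; i < mesh_count; i++ ) {
    glArrayElementBaseMESAX( mesh[i].first_vertex );
    glDrawElements( GL_TRIANGLES, mesh[i].index_count, GL_UNSIGNED_SHORT,
                    mesh[i].index_offset );
}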

The other guys mentioned the essential stuff.

In my case, I would either have to make many gl*Pointer calls or use 32-bit indices, both of which can drag down performance.

With offsetting, my programming becomes simpler and I would be making far fewer GL calls.

I think offsetting is a basic need. It need not be just about performance.

It need not be just about performance.
While I agree that this is a useful feature, I don’t see how it is about anything but performance. The number of gl*Pointer calls is relevant only in terms of performance, as is the use of 32-bit indices (except for the memory overhead, but that is only a problem in terms of performance). This is a feature like VBOs that exists only for the purpose of performance.

As I read this proposal, I'm strongly for it - especially if the offset switch can be compiled into a display list and the offset is absolute.

The only fault is that it encourages or requires programmers to preallocate their memory in large contiguous blocks, so you have to guess your eventual memory needs rather than allocating on the go.

If you have a static environment like knackered's (as I read him), that is probably not a bother at all. But if the 'chunked' geometry is streamed dynamically, you risk having unused memory, or worse, having to reallocate memory. Still, if it offers a performance gain I would take it. Anything that can be accepted and gives the driver/bus a heads-up, I'm for.

If someone ever plans to take this matter to task and wants to collect signatures or emails for notification, I would pledge my name.

sincerely,

michael

Some posters seem to undervalue the point of performance a bit, as in "it's only an optimisation".
Let’s not forget, hardware acceleration is all about performance in the first place.

An equivalent to a “base vertex index” in OpenGL would be a very useful addition!

It would especially help people who have to design an API-independent renderer (= me!) when one API has an index offset semantic and the other does not.

cheers

I’m going to bump this topic again, seeing as though OpenGL is in the process of being re-specified.
It would be a crime for this proposed mechanism not to be implemented in the new API.

As I understand it, in the new API there will be no current vertex array, you specify one (encapsulated in some kind of object) as parameter to the draw call.

This implies that using a different vertex array for each draw call is cheap.

So do we really need a base index any longer? Because the problem this feature is trying to solve (gl*Pointer is expensive) no longer exists in the new API.

Mmm, fair point. But the driver must be comparing the previous vertex array to the current one set up on the card and switching if necessary. Just because this is exposed to the user as a transparent ‘use this vertex buffer’ doesn’t necessarily mean it will be cheap to use a different one in every call.
Therefore the argument for an index buffer offset is still valid.

Nobody working on GL-next has stated that draw calls will take a vertex array object as a parameter. This is pure speculation, as no such decision has been made.

I’m not going to commit to a specific solution to the vertex base issue raised here, but it will be taken into consideration.

There is an underutilized forum for new feature requests: the “Suggestions for the next release of OpenGL” section. Ideas buried in threads in the Advanced forum are less likely to be noticed.

Ok, I’ve put an entry in the “Suggestions for the next release of OpenGL” section.
http://www.opengl.org/cgi-bin/ubb/ultimatebb.cgi?ubb=get_topic;f=7;t=000581