Geometry instancing

It would be great to add support for accelerated geometry instancing, based on a single set of vertex data for a model plus multiple transformations/vertex shaders.

I can imagine something like this being added soon, as part of OpenGL 2.1:

glEnable(GL_INSTANCING);

glColorInstancePointer(0, vert_color_array_0);
glTransformInstancePointer(0, mv_mat_0);

glColorInstancePointer(1, vert_color_array_1);
glTransformInstancePointer(1, mv_mat_1);
.
.
.
glColorInstancePointer(i, vert_color_array_i);
glTransformInstancePointer(i, mv_mat_i);

glMultiDraw*(…);

for each draw op (i):
    use the same vertex data
    if instancing is enabled then
        use the array specified by the InstancePointer calls
    else
        use the array specified by the corresponding vertex data Pointer call

Let's elaborate on this solution and help promote OpenGL.

The only way we can improve OpenGL, help it grow and compete with D3D, and encourage developers to switch to this great API (whetting their appetite) is by making suggestions for future releases, even if all we end up with are approved extensions that work fast.

However, just nagging and blaming Microsoft is not the solution. At least Microsoft supports its APIs, whatever policies it has in mind. Yet we blame it for everything, including not supporting other APIs or OSes; why should they do that?

Have you read the ARB meeting notes?
The ARB (Architectural Review Board) meets and discusses subjects like this.

This one is from December 7-8, 2004

http://www.opengl.org/about/arb/notes/meeting_note_2004-12-07.html

Search for the word “instancing”.
You may also want to search this board and the “Advance GL” board for the topic.

The current outcome is that GL doesn’t need it due to its design.
If you think you need it, please state your reasons.

Suppose a high-resolution character needs to be rendered in a scene multiple times, each with a different pose (a skinned skeletal mesh). It would be more helpful and efficient to support a "hardware functionality" with multiple pipelines for the same stage. It's more hardware, and the API should be capable of specifying the use of this feature.

Suppose a high-resolution character needs to be rendered in a scene multiple times, each with a different pose (a skinned skeletal mesh). It would be more helpful and efficient to support a "hardware functionality" with multiple pipelines for the same stage. It's more hardware, and the API should be capable of specifying the use of this feature.
How would instancing help you there? The GPU will still need to transform the same number of vertices and fetch the same number of attributes, with or without instancing.

The only thing you save is some time in glDrawElements(), which is cheap anyway.

Instancing only really makes sense for D3D, where calls to DrawPrimitives*() are very expensive.

Maintaining an internal cache for the core vertices and their associated data, then pulling from that cache into multiple geometry/rasterization pipelines.

Maintaining an internal cache for the core vertices and their associated data, then pulling from that cache into multiple geometry/rasterization pipelines.
Yeah, they won’t be doing that anytime soon. D3D instancing doesn’t even do that.

But, once render-to-vertex-array is online, you can implement it yourself.

implement it yourself …
What kind of unprofessional language is that?
And D3D is not a standard measure; you can't just say "not even D3D does so" …

D3D is a reflection of what is implemented in the silicon, so we use it as a measure of what the hw can do.

Proper vertex cache utilization can improve performance in a significant way, so this suggestion isn’t bad.

implement it yourself …
What kind of unprofessional language is that?
If you say so…

It’s quite reasonable to implement it yourself. If you have many large models that need to be rendered in many places, but each instance can be mostly T&L’d with only a positional (and possibly orientational) offset applied to each vertex afterwards, then there is the possibility of a performance boost.

Note that this is not a common occurrence. Certainly most applications have no need of it, and even games only occasionally need such a thing (trees, etc.).

And D3D is not a standard measure; you can't just say "not even D3D does so" …
My point was that D3D’s instancing support doesn’t feature it. If their instancing doesn’t do it, what makes you think that OpenGL has such a need?