ATI_vertex_array_object

I am trying to port my engine to Radeon, so I read the ATI_vertex_array_object extension spec. Do I understand it correctly: the guys don’t have support for interleaved arrays???

P.S. VAR is much much better solution…

I don’t think VAR is a much better solution than VAO: it introduces issues that should normally be handled by the driver (e.g. synchronization).

You may be interested in the new ARB-approved vertex_buffer_object extension:
http://oss.sgi.com/projects/ogl-sample/registry/ARB/vertex_buffer_object.txt
You will find that its design is much closer to VAO than VAR.
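Just to illustrate the resemblance, here is roughly how an interleaved setup maps onto ARB_vertex_buffer_object. The entry points and tokens are from the spec linked above, but treat this as an untested sketch (it assumes a current GL context with the ARB_vbo entry points loaded; `nbVerts`, `vertexSize` and `verts` are placeholders):

```
// Creation:
GLuint vbo;
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, nbVerts * vertexSize, verts, GL_STATIC_DRAW_ARB);

// Rendering: while the buffer is bound, the classic gl*Pointer calls
// take byte offsets into the buffer instead of client pointers.
glVertexPointer(3, GL_FLOAT, vertexSize, (char *)0);
glNormalPointer(GL_FLOAT, vertexSize, (char *)0 + sizeof(float) * 3);
glTexCoordPointer(2, GL_FLOAT, vertexSize, (char *)0 + sizeof(float) * 6);
glDrawElements(…);

// Destruction:
glDeleteBuffersARB(1, &vbo);
```

So the main conceptual change versus VAO is that the offset moves from a `glArrayObjectATI` parameter into the pointer argument of the standard array calls.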

Does anyone know where to find a demo on VAO performance?

When nv launched VAR/FENCE they posted a simple demo. On an NV15, the demo was impressive (VAR/FENCE achieved over 70% more performance on my system), so I would like to know if anyone knows where to find the same thing for VAO.

I also agree that VAO is actually the best. Too bad I don’t like the function names. I am going to look at the ARB spec - hope I will like them…

http://www.fl-tw.com/opengl/GeomBench/

The source code is there too.

There’s a demo on ATI’s website too, but I wouldn’t call it a “demo on VAO performance”, as it only displays a single quad (how nice, using VAO for 4 vertices, huh? what did they smoke?).

Do I understand it correctly: the guys don’t have support for interleaved arrays???

Interleaved arrays are no problem with VAO; what makes you think they’re not possible?

Y.

And how should I define an interleaved array in VAO?
The pointer bindings do not include interleaved arrays (as far as I read).

B.T.W., check out the streaming demo at www.delphi3d.net. Is it possible with VAO?

struct MyInterleavedVertex
{
float xyz[3]; // Offset = 0
float normal[3]; // Offset = sizeof(float) * 3
float uv[2]; // Offset = sizeof(float) * 6
};

MyInterleavedVertex *verts;

/// creation:
GLuint vaoObj = glNewObjectBufferATI(nbVerts * sizeof(MyInterleavedVertex), verts, GL_STATIC_ATI);

/// rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);

glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, sizeof(MyInterleavedVertex), vaoObj, 0);
glArrayObjectATI(GL_NORMAL_ARRAY, 3, GL_FLOAT, sizeof(MyInterleavedVertex), vaoObj, sizeof(float) * 3);
glArrayObjectATI(GL_TEXTURE_COORD_ARRAY, 2, GL_FLOAT, sizeof(MyInterleavedVertex), vaoObj, sizeof(float) * 6);

glDrawElements(…)

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);

/// destruction:
glFreeObjectBufferATI(vaoObj);

Not guaranteed, it’s from the top of my head.

Yeah, streams are also possible. There are very few limitations to VAO.
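For streaming you’d allocate the object buffer with GL_DYNAMIC_ATI and respecify it each frame with glUpdateObjectBufferATI. Same caveat, this is from the top of my head, so check the spec (bufferSize, nbVerts, vertexSize and verts are placeholders):

```
// Allocate once, with NULL data and a dynamic usage hint:
GLuint obj = glNewObjectBufferATI(bufferSize, NULL, GL_DYNAMIC_ATI);

// Per frame: upload the new vertices. GL_DISCARD_ATI tells the driver
// it need not preserve the previous contents, so it can avoid a sync.
glUpdateObjectBufferATI(obj, 0, nbVerts * vertexSize, verts, GL_DISCARD_ATI);
glArrayObjectATI(GL_VERTEX_ARRAY, 3, GL_FLOAT, vertexSize, obj, 0);
glDrawElements(…);
```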

Y.

[This message has been edited by Ysaneya (edited 03-19-2003).]

It’s cheating!!!

But thanks for the demo. I didn’t think it would perform very well (memory boundary issues and so on), but you proved it to me :slight_smile:

Yes. Does anyone know when the ARB buffers will be supported by our drivers?

Thanks.

Cheating? It’s the official, recommended way of doing it. I fail to see how it’s cheating.

Slow? Alignment issues? The vertex is aligned on 32 bytes, which is probably the best alignment one can get when using vertex arrays.

I fail to see why you’ve got a problem with that solution :slight_smile:

Y.

ARB_vbo should be present in drivers very shortly. It is already supported in NVIDIA’s beta drivers (43.30, if I’m not mistaken); the extension is not listed in the extension string, but you can initialize the function pointers anyway. Not too sure about ATI, but I guess it will come quickly (Catalyst 3.3, probably).