ATI 7200: What is its native vertex format?

My app draws a 350x350 grid of 2D triangle strips with a 1D texture map for contouring. On my 400MHz Celeron system with an ATI 7200 series card, performance drops dramatically when I use vertex arrays or vertex buffer objects instead of display lists. On my 2.26GHz P4 with an ATI 9200 card, display lists and VBOs are the same speed.

I put a Win32 GLUT test app with source out on my website: http://users.adelphia.net/~mikegi/glutcheck/

Anyone have any ideas how to eliminate the terrible performance with vertex arrays and VBOs?

Thanks

Something’s wrong with your app; I can get only 39 fps on an FX5900 at different clock frequencies and all possible render modes.

Use floats for vertices.
Use floats for normals.
Use floats for texture coordinates.
Use a float for the fog coordinate.
Use unsigned bytes for colors and - if you use vertex colors at all - always specify full RGBA, not just RGB.

edit: Just saw your code.
You’re using 2D vertex positions. Try using three components instead; it might help.
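Something along these lines is what I mean (just a sketch; the struct and function names are made up, not taken from your code):

[code]
/* Sketch of the suggested layout: 3-component float positions, float
   texcoords, and full RGBA colors as unsigned bytes. All names are made up. */
#include <GL/glut.h>

typedef struct {
    GLfloat x, y, z;      /* 3-component position, even for a flat grid */
    GLfloat s;            /* 1D contour texture coordinate */
    GLubyte r, g, b, a;   /* full RGBA, not just RGB */
} GridVertex;

static GridVertex grid[350 * 350];

void setArrayPointers(void)
{
    const GLsizei stride = sizeof(GridVertex);

    glVertexPointer(3, GL_FLOAT, stride, &grid[0].x);
    glTexCoordPointer(1, GL_FLOAT, stride, &grid[0].s);
    glColorPointer(4, GL_UNSIGNED_BYTE, stride, &grid[0].r);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
}
[/code]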

[This message has been edited by zeckensack (edited 02-10-2004).]

I have tried just about every combination of vertex formats for the array and VBO that I can think of:

  1. x, y with separate t array
  2. x, y, z with separate t array
  3. packed x, y, t
  4. packed x, y, z, t
  5. packed t, x, y
  6. packed t, x, y, z

The fps will not budge. What format does the driver use for a display list? Judging by my app’s memory consumption at runtime, the display list compiler appears to be converting my x, y, t coords to x, y, z, t.
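For reference, the packed x, y, z, t variant (number 4 above) is set up roughly like this (a sketch with made-up names; it assumes the GL_ARB_vertex_buffer_object entry points have already been loaded with wglGetProcAddress):

[code]
/* Sketch of format 4: interleaved x, y, z, t in a single VBO.
   Assumes the GL_ARB_vertex_buffer_object functions were fetched elsewhere;
   all names here are made up. */
#include <GL/glut.h>

#define GRID_DIM          350
#define FLOATS_PER_VERTEX 4      /* x, y, z, t */

static GLuint  vbo;
static GLfloat verts[GRID_DIM * GRID_DIM * FLOATS_PER_VERTEX];

void createAndBindVBO(void)
{
    const GLsizei stride = FLOATS_PER_VERTEX * sizeof(GLfloat);

    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, sizeof(verts), verts, GL_STATIC_DRAW_ARB);

    /* with a buffer bound, the pointer arguments are byte offsets */
    glVertexPointer(3, GL_FLOAT, stride, (const GLvoid *)0);
    glTexCoordPointer(1, GL_FLOAT, stride, (const GLvoid *)(3 * sizeof(GLfloat)));
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
[/code]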

Originally posted by zeckensack:
[b]Use floats for vertices.
Use floats for normals.
Use floats for texture coordinates.
Use a float for the fog coordinate.
Use unsigned bytes for colors and - if you use vertex colors at all - always specify full RGBA, not just RGB.

edit: Just saw your code.
You’re using 2D vertex positions. Try using three components instead; it might help.

[This message has been edited by zeckensack (edited 02-10-2004).][/b]

That definitely is strange. I posted a message about this problem on comp.graphics.api.opengl, and one person replied with very good numbers on a “P4 2.4 / GF4800SE using Det 45.33” system (both the display list and VBO paths gave 195 fps).

This is my first GLUT app, so it’s definitely possible that I screwed something up. I used GLUT samples found on the web as a starting point. However, this test app uses the same algorithm as my pure Win32+OpenGL app, and both get about the same fps on my system.

Originally posted by M/\dm/:
Something’s wrong with your app; I can get only 39 fps on an FX5900 at different clock frequencies and all possible render modes.

It’s definitely strange and I don’t know what’s going on with your VBOs.

A few things I’ve noticed in your code (even though I don’t think they explain your issues):
1) You’re creating the texture every time you render a frame, and destroying it afterwards. The glTexImage family isn’t particularly efficient. You should keep texture objects alive as long as you can still use them (see the sketch at the end of this post).

2) When rendering from the display list, you effectively bypass issue 1, because you compile the texture generation into the display list, along with the actual geometry. So one could say your numbers are skewed in favor of the display list, because the other rendering methods have to do more work.

3) You call Display from your idle function. You should instead call glutPostRedisplay there and let GLUT handle it; there may be other events waiting to be processed, or your window may be completely invisible (again, see the sketch below).

On second thought, issue #2 might trigger some special driver behaviour for “ancient GL applications”™. In the absence of any further ideas, can you try disabling the complete display list path in your code, and also skipping the display list creation?
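For points 1 and 3, I mean something like this (only a sketch; the names are placeholders, not from your code):

[code]
/* Sketch for points 1 and 3: build the 1D contour texture once at startup
   and keep it, and drive redraws through glutPostRedisplay.
   All names are placeholders. */
#include <GL/glut.h>

static GLuint  contourTex;
static GLubyte contourColors[256][4];   /* contour ramp, filled in elsewhere */

static void initTexture(void)
{
    glGenTextures(1, &contourTex);
    glBindTexture(GL_TEXTURE_1D, contourTex);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, contourColors);   /* once, not per frame */
}

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_1D);
    glBindTexture(GL_TEXTURE_1D, contourTex);   /* just bind, don't rebuild */
    /* ... draw the grid here ... */
    glutSwapBuffers();
}

static void idle(void)
{
    glutPostRedisplay();   /* instead of calling Display directly */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("contour test");
    initTexture();
    glutDisplayFunc(display);
    glutIdleFunc(idle);
    glutMainLoop();
    return 0;
}
[/code]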

Well, I finally got to the bottom of this problem. I had to resort to using DirectX 8 vertex buffers to find the exact format the 7200 accelerates. It is [ x, y, z, u, v ]. When I created a D3D vertex buffer with a single texcoord, [ x, y, z, u ], I got the same result as in OpenGL: 1 fps. When I added ‘v’, the rate shot up to 70 fps.
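In FVF terms, the two layouts I compared look roughly like this (a sketch; the struct names are mine):

[code]
/* Sketch of the two D3D8 vertex formats compared; struct names are mine. */
#include <d3d8.h>

/* fast on the 7200: position plus a full 2D texcoord, i.e. [ x, y, z, u, v ] */
typedef struct {
    float x, y, z;
    float u, v;
} VertexXYZUV;
#define FVF_XYZUV  (D3DFVF_XYZ | D3DFVF_TEX1)   /* texcoord set 0 defaults to 2 floats */

/* slow, matching the 1 fps OpenGL case: a single texcoord, i.e. [ x, y, z, u ] */
typedef struct {
    float x, y, z;
    float u;
} VertexXYZU;
#define FVF_XYZU   (D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_TEXCOORDSIZE1(0))
[/code]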

I don’t know what’s wrong with the ATI OpenGL driver. After finding this vast speed improvement, I modified my app to generate that format. It did help some on the 7200, but I never got close to 70 fps. Worse, the change slowed down performance on my 9200.
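On the OpenGL side, the change amounts to padding the contour coordinate out to two components, roughly like this (again a sketch; names are made up, and the second component is just a dummy zero):

[code]
/* Sketch of the GL-side change: pad the 1D contour coordinate to a full
   2-component texcoord so the vertex layout matches [ x, y, z, u, v ].
   Names are made up. */
#include <GL/glut.h>

typedef struct {
    GLfloat x, y, z;
    GLfloat s, t;      /* t is a dummy, always 0.0f */
} PaddedVertex;

void setPaddedPointers(const PaddedVertex *v)
{
    const GLsizei stride = sizeof(PaddedVertex);

    glVertexPointer(3, GL_FLOAT, stride, &v->x);
    glTexCoordPointer(2, GL_FLOAT, stride, &v->s);   /* size 2 instead of 1 */
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
}
[/code]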

I’ve updated my website with more info. It now includes the source and Win32 exe for the DX8 app:
http://users.adelphia.net/~mikegi/glutcheck/