Optimizing for the GeForce

Hi all,

Just a quick note to say that I’ve updated my site http://www.therealitydysfunction.com/ with my latest project “PlanetView” and its demo .exe.

What's really bugging me is that it only manages 8-10 FPS on a P3 800 with a GeForce256. I thought GeForce cards were meant to be able to shift a lot more than 250,000 polys before keeling over, especially as it's a static mesh I'm drawing with display lists and GL_COMPILE. I'd appreciate any helpful comments or suggestions on how to speed it up. There's no lighting code yet, no alpha, no textures; just 250,000 polys being drawn in a display list using GL_TRIANGLES…



I'd hazard a guess that you are simply processing too many triangles that aren't visible. For terrain you need some type of visible-surface determination (VSD) and/or mesh LOD. Also, for a GeForce, I'd probably look into processing the terrain as one or more triangle strips in a vertex array range.
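To sketch the VSD idea (the names, plane convention, and chunk scheme here are illustrative, not from this thread): split the terrain into chunks, give each a bounding sphere, and test that sphere against the view-frustum planes before submitting the chunk's triangles:

```c
#include <assert.h>

/* Illustrative frustum-culling sketch.  A plane is ax + by + cz + d,
   with values >= 0 counting as "inside". */
typedef struct { float a, b, c, d; } Plane;
typedef struct { float x, y, z, radius; } Sphere;

/* Returns 1 if the chunk's bounding sphere is at least partly inside
   every plane, 0 if it lies entirely behind one of them (safe to skip). */
int chunk_visible(const Sphere *s, const Plane *planes, int nplanes)
{
    int i;
    for (i = 0; i < nplanes; i++) {
        float dist = planes[i].a * s->x + planes[i].b * s->y
                   + planes[i].c * s->z + planes[i].d;
        if (dist < -s->radius)
            return 0;   /* completely behind this plane: culled */
    }
    return 1;
}
```

The render loop would then call the display list (or glDrawElements) only for chunks that pass this test, so the card never sees the triangles behind the camera.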

[This message has been edited by DFrey (edited 11-07-2000).]

  1. Try using vertex arrays, with an appropriate format for vertex coordinates (e.g. 3 floats/vertex), vertex colors (4 unsigned chars/vertex), and texture coords (2 floats/vertex).
  2. Try using the NV_vertex_array_range and NV_fence extensions to place your vertex arrays in AGP or video memory.
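To make point 1 concrete, here is a sketch of an interleaved vertex layout matching that format (the struct name is mine; the GL calls are the standard OpenGL 1.1 client-state setup, shown in a comment because they need a live context):

```c
#include <stddef.h>

/* Interleaved vertex in the suggested format: 3 floats position,
   4 unsigned chars color, 2 floats texture coords = 24-byte stride. */
typedef struct {
    float         pos[3];
    unsigned char color[4];
    float         tex[2];
} Vertex;

/* GL client-state setup (sketch; needs a live GL context):

   glEnableClientState(GL_VERTEX_ARRAY);
   glEnableClientState(GL_COLOR_ARRAY);
   glEnableClientState(GL_TEXTURE_COORD_ARRAY);
   glVertexPointer(3, GL_FLOAT, sizeof(Vertex), verts[0].pos);
   glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(Vertex), verts[0].color);
   glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), verts[0].tex);
   glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, indices);
*/

/* All three pointers share this stride. */
size_t vertex_stride(void) { return sizeof(Vertex); }
```

For point 2, the same array would additionally be allocated in AGP or video memory (via wglAllocateMemoryNV and glVertexArrayRangeNV on Windows) so the card can pull vertices directly instead of copying them across the bus each frame.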

You can find more info on Nvidia’s developer site. www.nvidia.com/developer

Hope it helped.

You should also look into real-time terrain navigation algorithms. Lindstrom's algorithm and ROAM seem to be the mainstream choices. In the link below you'll find links to the relevant papers.


Thanks for the replies guys,

So NV_vertex_array_range is faster than a display list for a static mesh? That seems a bit weird, but I'll give it a try! About drawing with GL_TRIANGLE_STRIP: can you use it to draw any form of mesh? The reason I didn't use it was that I didn't think it would work for a mesh like mine that's stored in a 2D array. I suppose I could move it into a vertex array and do a bit of fancy index ordering, and it 'should' come out as a triangle strip…
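For what it's worth, a regular grid stored in a 2D array strips quite naturally. One common ordering (this helper is my own sketch, not from the thread) walks pairs of rows and repeats two indices between rows, producing zero-area "degenerate" triangles that keep the whole grid in a single strip:

```c
#include <stdlib.h>

/* Build one GL_TRIANGLE_STRIP index list for a w x h vertex grid stored
   row-major.  Rows are joined by repeating two indices, which yields
   degenerate triangles the card rejects cheaply.
   Returns the index count and stores the malloc'd list in *out. */
int grid_strip_indices(int w, int h, unsigned int **out)
{
    int count, n = 0, row, col;
    unsigned int *idx;

    if (w < 2 || h < 2) { *out = NULL; return 0; }
    count = (h - 1) * 2 * w + (h - 2) * 2;
    idx = malloc(count * sizeof *idx);

    for (row = 0; row < h - 1; row++) {
        if (row > 0) {
            /* degenerate link: repeat last index, then next row's first */
            idx[n++] = (row + 1) * w - 1;
            idx[n++] = row * w;
        }
        for (col = 0; col < w; col++) {
            idx[n++] = row * w + col;        /* top vertex    */
            idx[n++] = (row + 1) * w + col;  /* bottom vertex */
        }
    }
    *out = idx;
    return count;
}
```

A single glDrawElements(GL_TRIANGLE_STRIP, count, GL_UNSIGNED_INT, idx) then draws the whole heightfield in one call.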