Performance and Vertex Array Range

Hi,
I have several questions to ask you:

  1. What is the highest triangle rendering rate you have been able to achieve with your OWN OpenGL code (on a GeForce4)?
    I have only been able to reach 11.5 million triangles per second with Vertex Array Range (using video memory). The machine is a dual Athlon 1.2 GHz with 2 GB of memory and a
    GeForce4 Ti 4600. No triangle strips.
    That seems low for a GeForce4, right?
  2. I can’t seem to allocate AGP memory. I tried the NVIDIA demo program that reports how much AGP memory is available in the system, and it returns 0.
    Is there a special patch that I have to install?
  3. Has anyone been able to use Vertex Array Range under Linux (Red Hat 7.2)? glXAllocateMemoryNV always returns NULL for me, whether I ask for AGP memory or video memory; it doesn’t matter (the call pattern I mean is sketched after this list).
    We have installed the new NVIDIA driver (version 2960).
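
A minimal sketch of the allocation and setup pattern I mean (not the exact code; the buffer size and the frequency/priority hints are just the values commonly suggested for video vs. AGP memory, and depending on your headers the entry points may have to be fetched with glXGetProcAddressARB instead):

```c
/* NV_vertex_array_range setup sketch under GLX.  Priority 1.0 is the
 * usual hint for video memory, something like 0.5 for AGP memory;
 * the 4 MB buffer size is arbitrary. */
#define GL_GLEXT_PROTOTYPES
#define GLX_GLXEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glx.h>
#include <stdio.h>

#define VAR_SIZE (4 * 1024 * 1024)

static void *var_mem = NULL;

int setup_vertex_array_range(void)
{
    /* Try video memory first (high priority)... */
    var_mem = glXAllocateMemoryNV(VAR_SIZE, 0.0f, 0.0f, 1.0f);
    if (var_mem == NULL) {
        /* ...then fall back to AGP memory (lower priority). */
        var_mem = glXAllocateMemoryNV(VAR_SIZE, 0.2f, 0.2f, 0.5f);
    }
    if (var_mem == NULL) {
        /* Both NULL: the symptom described in question 3. */
        fprintf(stderr, "glXAllocateMemoryNV returned NULL\n");
        return 0;
    }

    /* Declare the range vertex pulls may come from, then enable VAR.
     * Vertex data is copied into var_mem and drawn with the normal
     * vertex-array calls (glVertexPointer + glDrawElements). */
    glVertexArrayRangeNV(VAR_SIZE, var_mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    return 1;
}
```

On our Red Hat 7.2 machine both allocation calls come back NULL.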

Thank you,

Budi

2) is directly related to 1): if the AGP allocation fails, VAR ends up pulling from plain system memory and you lose most of the speed benefit.

You probably need to get decent AGP drivers for your motherboard. If it’s a VIA chipset, get the latest 4-in-1 drivers from the VIA web site (www.via.com.tw) and install them. If it’s some other vendor, go to their site.

I’ve even had the case where a friend bought an Athlon machine from Micron, and it came loaded with the wrong AGP drivers. He thought I was crazy when I told him his brand new machine was a piece of crap and needed an upgrade :)

20.5 million now on a GF2 MX; it just won’t go any higher, Scotty.

>> I can’t seem to allocate AGP memory. I tried the NVIDIA demo program that reports how much AGP memory is available in the system, and it returns 0. <<

I used to have the same problem with my Celeron + motherboard. The solution was as jwatte suggested: if you have a VIA motherboard, get the new 4-in-1 drivers.
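
If you want to test it from your own code instead of the demo, something like this rough WGL sketch does the same kind of check (not the demo’s actual code; the probe sizes and the halving loop are made up, priority below 1.0 asks for AGP rather than video memory, and it needs a current GL context):

```c
/* Rough AGP-availability probe under Windows/WGL, similar in spirit to
 * the NVIDIA demo mentioned above.  It tries wglAllocateMemoryNV with
 * priority 0.5 (AGP rather than video memory) at decreasing sizes and
 * reports the largest block that succeeds.  Returns 0 when no AGP
 * memory is usable, which is what missing or broken chipset AGP
 * drivers (e.g. old VIA 4-in-1) look like. */
#include <windows.h>
#include <GL/gl.h>

typedef void *(WINAPI *PFNWGLALLOCATEMEMORYNVPROC)(GLsizei size,
        GLfloat readfreq, GLfloat writefreq, GLfloat priority);
typedef void (WINAPI *PFNWGLFREEMEMORYNVPROC)(void *pointer);

/* Needs a current OpenGL rendering context. */
static int probe_agp_memory_mb(void)
{
    PFNWGLALLOCATEMEMORYNVPROC pAlloc = (PFNWGLALLOCATEMEMORYNVPROC)
        wglGetProcAddress("wglAllocateMemoryNV");
    PFNWGLFREEMEMORYNVPROC pFree = (PFNWGLFREEMEMORYNVPROC)
        wglGetProcAddress("wglFreeMemoryNV");
    int mb;

    if (pAlloc == NULL || pFree == NULL)
        return 0;  /* driver does not expose NV_vertex_array_range */

    for (mb = 64; mb >= 1; mb /= 2) {
        void *mem = pAlloc(mb * 1024 * 1024, 0.2f, 0.2f, 0.5f);
        if (mem != NULL) {
            pFree(mem);
            return mb;  /* largest AGP block (in MB) that worked */
        }
    }
    return 0;  /* nothing allocated: check the chipset AGP drivers */
}
```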