I am using GLStudio 3.2 to generate an OpenGL 2.0 application running at 30Hz.
It is compiled under XP SP3 with Visual Studio 2005 SP1 (C++).
On the desktop (AMD Athlon 64 X2 5200+ @ 2.6 GHz, ATI X1300) it takes about 5% CPU.
On the target platform (XP SP3, dual-core AMD Opteron 2.2 GHz, Tyan S2927 motherboard with NVIDIA Ethernet and PCI drivers, three NVIDIA FX5200 PCI cards with DualView enabled, using the NVIDIA 175.19 driver from July 2008, the latest I've found), one instance takes 50% CPU.
The application is a simple user interface: fairly static data and simple graphics that don't move much. It makes heavy use of the built-in GL Studio functions for populating lists, setting highlights in the lists, etc.
I've seen some postings that NVIDIA DualView has issues with OpenGL (ref: NVIDIA developer forum, http://developer.nvidia.com/forums/index.php?showtopic=2257).
Dumbing down the video cards to single-monitor mode has not helped.
The FX5200 uses the nView control panel, and I did not find the Threaded Optimization setting there. I am getting the ForceWare control panel and will let you know.
OpenGL 1.1 was being forced because the XGI card was set as the primary display. I guess the primary card controls the OpenGL level for all cards.
Still have performance problems, though.
You found the reason. Here's the HW vendor's description of their Volari: "it offers a high value, reliable graphics solution for server and cost conscious systems. Why pay for something you don’t need?"
In other words, it's a RAMDAC with no acceleration whatsoever. An ATI Rage 3D from 1995 will murder this card in performance.