Hey everybody, I’ve been bringing my OpenGL demos in to show my coworkers. I work on a software test team, so we’ve got >150 machines in our test lab, including some nice beefy Windows boxes.
So I’m running my latest demo on a couple of different machines here in the lab, and the performance difference is almost unbelievable. Here are the setups:
Dual Athlon XP 1800+
1 or 1.5 gig RAM
Tyan Tiger Mobo (S2640)
Both boxes have 32 MB Radeons in them, with the same driver (6166, the latest supported by ATI). OpenGL versions are identical, as are the supported extensions.
The differences between the two boxes: one has WinXP Pro, the other has Win2k Server. The only other difference is that one card is a Radeon SDR and the other is a Radeon DDR.
Now, I’d expect some performance difference there, even up to 150% greater performance out of the DDR. But I’m seeing a framerate of 11 fps on the SDR, while the DDR is showing 60 fps (I can’t get vsync disabled on the XP box). On my Radeon 8500 at home (Win2k Pro, 1.1 GHz Athlon), I get ~750 fps with the same demo, so I’m sure the DDR box here at work is getting well over 60; it’s just capped by vsync.
Does anyone have an idea why the SDR would have such low performance? I mean, at least a 5–6x performance difference between the DDR and SDR is just blowing my mind. Is there something in Win2k Server that’s the bottleneck?
Thanks for any input you might have.