Inno GeForce3 video card

This week I bought a 64 MB Inno GeForce3 video card.

When I run 3DMark, its score is 800% higher than with my old TNT2 card, but when I run my game code, I don’t appear to be getting any massive leap in performance.

Machine is a Pentium III 800 MHz with 384 MB RAM,
running Win2K Professional.

On the TNT2 card I was getting 20-30 fps,
depending on how much was on screen, and I am still getting around that now.
(The code has its framerate inhibitor disabled for these tests.)

I have tried Inno’s latest drivers, and also NVIDIA’s… and neither improved the OpenGL performance.

I am struggling to explain why I am not seeing any speed increase…

Anyone had similar experiences?


Just reading a couple of similar articles…
A few things in there I can try.

… your application is CPU limited and 3DMark is not ?!

Hard to say without more information about your application.


Unsure what you mean by this, but I have no framerate-inhibiting code in there, so I would expect the game to play a little smoother/faster due to the extra performance of the GeForce card…


OK, I enabled PFD_GENERIC_ACCELERATED in my pixel format descriptor… no change in framerate.

I have a P2 733 MHz, Windows NT 4.0 box at work, running an Intel motherboard with onboard graphics, and it is whipping my Pentium III 800 MHz with GeForce3 for framerate.

The GeForce3 machine has anti-aliasing switched off, and vertical sync is on.

Is there any way of determining whether I am using the GeForce’s accelerated mode, rather than software rendering?


If it’s software you normally won’t get over 1 fps in any app… and in games even less…

disable vsync and test again…

and if the app/game you test is CPU limited, a new GPU doesn’t really matter…

First of all, disable vertical sync. Measuring performance with vsync on is pointless. Then try to find out if you’re geometry, CPU or fillrate limited. Suggestions on how to do this are in the OpenGL performance guide here. You’re still getting a pretty decent framerate so I doubt you’re using software rendering, but you never know. Try switching to 32-bit colour and see if that helps. Are you using any "unusual" state like stencil, polygon smoothing or destination alpha blending?

Thanks guys…

Not quite sure what you mean by CPU limited?
If you mean the game is using the CPU 100% during each frame, it’s definitely not.
I disabled the majority of the processing-intensive game code, and the framerate is still slow. I will, however, disable vertical blank synchronisation and retest.

The most curious thing about comparing the two machines is that one is a P2 733 and the other is a P3 800 MHz… i.e. overall, my machine’s spec walks all over the P2 machine’s,
and yet the P2 outperforms it.

I’m not using stencils, and am pretty confident I am not using polygon smoothing
or dest alpha blends; as there’s a lot of code, I will check though.

I will have to try your suggestions out tonight ( I’m at work at present! ), and mail back in.

Thanks again

They never made a 733 MHz Pentium II, did they?

You really ought to turn off V-sync.

Anyway, are you using vertex arrays or immediate mode? Uploading textures at the start or every frame?

What does VTune say about where you’re spending your time?

Last: if you make your window really small (like 128x96 pixels), will there be a difference? I.e., you may currently be fill rate limited; you may have decent fill rate on that TNT2 and a cheap GF3 with similar fill?

[This message has been edited by jwatte (edited 08-12-2002).]

Why don’t you post a link to the particular test program(s) you are talking about? Source-code would be helpful too… there ought to be some reason why the new card isn’t handily outperforming the old one. Are you quite certain your framerate counter is accurate? If you’re not using framerate, what kind of test are you measuring the lack of improvement with?

>>Ok, enable PFD_GENERIC_ACCELERATED in my pixel format descriptor…no change in framerate.<<

1/ You can’t enable it.
2/ Even if you could, you wouldn’t want to, as it means "don’t use the card, do everything in software". (+ no, I don’t know why they stuck "accelerated" in the name.)

Ok…lots of posts to reply to :

I disabled vsync last night, and am getting 28-40fps.

The PFD_GENERIC_ACCELERATED flag idea came from another posting on this forum… someone suggested it, so I tried it.

The frame rate is being measured using the system high-performance timers, i.e. counting how long it takes to render one frame and scaling up to how many you would get in a second.

The ‘test program’ is actually my game ‘Maverick’…which is available as a 9meg download on

What is vtune?

As for the Pentium II 733 MHz… well, Compaq seem to have got their hands on one!

The source code is epic… it would take anyone days to work out what the hell is going on…


> The frame rate is being measured using
> the system high performance timers i.e
> counting how long it takes to render 1
> frame and scaling up to how many you
> would get in a second.

That’s the wrong way to measure, as your measurement is in no way synchronized to the card. Modern card/driver pipelines may be able to buffer more than a frame’s worth of commands!

Instead, record time and frame number right after swapbuffers. Wait 10 frames (or 50, or three seconds, or whatever). Then record time again, and divide the frame count by the elapsed time.

int frameCounter = 0;
__int64 timeStamp = 0;
double cpuFrequency = …; // timer ticks per second

void mySwapBuffers() {
    SwapBuffers( myDc );
    if( ++frameCounter == 10 ) {
        __int64 otherTimeStamp = rdtsc();
        // note this will record a very small FPS the first time
        // (because of timeStamp starting out at 0)
        setCurFps( frameCounter * cpuFrequency / (otherTimeStamp - timeStamp) );
        timeStamp = otherTimeStamp;
        frameCounter = 0;
    }
}