EEK! 100fps->40fps!

Originally posted by Eric:
OK, I’ll try to explain this one…

Usually, how do we calculate our FPS ???

  1. We take current time t1.
  2. We render the scene.
  3. We call glFinish.
  4. We take current time t2.
  5. We call wglSwapBuffers.
  6. Go back to 1)

Then, one frame took t2-t1 to be drawn. So you can have 1/(t2-t1) frames per second (if t1 and t2 are in seconds, of course! Otherwise, simply convert them!).

That is a very bad way of calculating FPS (and a very commonly made mistake, I must add), as you are going to get inaccurate readings. What’s wrong with this picture? Let’s look at a theoretical example (numbers pulled off the top of my head).

>1) We take current time t1.
Let’s say this happens at time 0ms.
t1 = 0
>2) We render the scene.
>3) We call glFinish.
Let’s say these two steps take 250ms.
>4) We take current time t2.
t2 = 250
>5) We call wglSwapBuffers.
Let’s say we “just missed” our vsync, so this takes 250ms (yes, we’re going to pretend our refresh rate is set to 4Hz. It may make you blind, but it makes my calculations easy).
>6) Go back to 1)

t2-t1 = 250-0 = 250ms = 0.25 seconds

>1/(t2-t1) frames per second
1/0.25 = 4FPS.

So, we calculated that your app is running at 4 frames per second. But WAIT!!! Each rendering loop takes 250ms to render, plus 250ms to flip. That’s 500ms, or 1/2 second per frame. You are actually only getting 2FPS, but your faulty counter code just told you DOUBLE the actual value. See how much the way you time makes a difference? This is why TheGecko was completely off base when he said:

Now, whether my counter code is wrong or not, this still implies a big frame rate drop.

As soon as your counter code is the slightest bit wrong, your readings begin to mean NOTHING!!! Also note that in the above scenario, you aren’t taking into account anything other than graphics. If you have AI, physics, network code, etc… and you place them OUTSIDE of your counter loop (like you did with the swap), you are going to exaggerate your FPS even more. What’s the moral of the story? The moral is that every single instruction that executes each frame needs to happen INSIDE of the counter loop. The only true way to do that is to make your counter code span from one frame to the next.

The correct way to time your app is:

lastFrameEnd = 0
BEGIN RENDER LOOP
frameEnd = NOW
FPS = 1/(frameEnd-lastFrameEnd)
lastFrameEnd = frameEnd
RENDER SCENE (and draw your FPS counter)
SWAP BUFFERS
END RENDER LOOP

Now this will give you truly accurate results (except for your first frame, obviously, but it’s meaningless in the first frame anyway).
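In Win32 terms, a minimal sketch of that frame-spanning loop might look like the following. This is just an illustration, not code from this thread: QueryPerformanceCounter is used for the clock, and AppIsRunning/RenderScene are placeholders for your own message handling and drawing code.

#include <windows.h>

bool AppIsRunning();          // placeholder: your own message/quit handling
void RenderScene(double fps); // placeholder: draw the scene and the FPS counter

void RenderLoop(HDC hdc)
{
    LARGE_INTEGER freq, lastFrameEnd, frameEnd;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&lastFrameEnd);

    while (AppIsRunning())
    {
        // Measure from the start of one frame to the start of the next, so
        // rendering, the buffer swap and everything else fall inside the span.
        QueryPerformanceCounter(&frameEnd);
        double delta = double(frameEnd.QuadPart - lastFrameEnd.QuadPart)
                     / double(freq.QuadPart);          // seconds for the whole previous frame
        double fps = (delta > 0.0) ? 1.0 / delta : 0.0;
        lastFrameEnd = frameEnd;

        RenderScene(fps);      // draw scene + FPS counter
        SwapBuffers(hdc);      // the swap is inside the measured span
    }
}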

Oooooooppppppppsssssssssssss…
Sorry…
I was wrong…

As a matter of fact, I do it properly in my programs, but I guess I wasn’t fully awake when I posted the message!

Thanks for pointing out the mistake!

Regards.

Eric

Of course, I forgot to mention another common, and probably the worst, timing mistake you can make. After rendering a frame, you have your start time (t1) and end time (t2) in milliseconds, and you compute:

FPS = (t2-t1)

Then when you do some optimizing, you remove a whole bunch of useless code, and suddenly your app drops from 85FPS to 50FPS. Then you go “WHAT??? I MADE IT FASTER, BUT IT’S RUNNING SLOWER??? WTF!!!”

Then someone tells you “uhhhh, you forgot to invert your timer and convert milliseconds to seconds”
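In other words, with t1 and t2 in milliseconds, the conversion should look something like this hypothetical helper (the function name is mine, just for illustration):

// t1 = frame start, t2 = frame end, both in milliseconds.
double MillisecondsToFps(double t1, double t2)
{
    double frameTimeMs = t2 - t1;                             // elapsed time for the frame, in ms
    return (frameTimeMs > 0.0) ? 1000.0 / frameTimeMs : 0.0;  // invert and convert ms to seconds
}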


OK, that clears up a lot of things.

I’m going to post my timing code for you guys to take a look at. I got this timer class from a very reliable source.

As to that whole driver bug thing, I guess that makes sense. For those of you who are wondering what my app is doing, the answer is: NOTHING! And I’m not using GLUT. I wrote the whole window code myself, and I have a rendering loop that just keeps swapping buffers and writing out the FPS info. And, like I said, all I’m getting is ~60fps. Now, if what Asshen Shugar said is true, then I don’t have much to worry about (and yes, I do have the latest Detonator drivers from nVidia).

Anyway, here’s my timer code. If any of you do spot a bug, please let me know!

#include "EnigTimer.h"

//////////////////////////////////////////////////////////////////////
// Construction/Destruction
//////////////////////////////////////////////////////////////////////

//-----------------------------------------------------------------------------
// CTimer()
//-----------------------------------------------------------------------------
CTimer::CTimer()
{
    // We need to know how often the clock is updated
    if( !QueryPerformanceFrequency((LARGE_INTEGER *)&m_TicksPerSecond) )
        m_TicksPerSecond = 1000;

    m_fFps = 0;
    m_bRunning = false;
}

//-----------------------------------------------------------------------------
// Start()
// Reset counter and start the timer
//-----------------------------------------------------------------------------
void CTimer::Start()
{
    // Get the current time so we know when we started
    if( !QueryPerformanceCounter((LARGE_INTEGER *)&m_BaseTicks) )
    {
        m_BaseTicks = (UINT64)timeGetTime();
    }

    m_bRunning = true;

    m_fLastUpdate = 0;
    m_dwNumFrames = 0;

    m_fFrameTime = 0;
    m_fDeltaTime = 0;
}

//-----------------------------------------------------------------------------
// Stop()
// Stop the timer
//-----------------------------------------------------------------------------
void CTimer::Stop()
{
    if( m_bRunning )
    {
        // Remember when we stopped so we can know how long we have been paused
        if( !QueryPerformanceCounter((LARGE_INTEGER *)&m_StopTicks) )
        {
            m_StopTicks = (UINT64)timeGetTime();
        }

        m_bRunning = false;
    }
}

//-----------------------------------------------------------------------------
// Continue()
// Start the timer without resetting
//-----------------------------------------------------------------------------
void CTimer::Continue()
{
    if( !m_bRunning )
    {
        UINT64 Ticks;

        // Get the current time
        if( !QueryPerformanceCounter((LARGE_INTEGER *)&Ticks) )
        {
            Ticks = (UINT64)timeGetTime();
        }

        // Increase baseticks to reflect the time we were paused
        m_BaseTicks += Ticks - m_StopTicks;

        m_bRunning = true;
    }
}

//-----------------------------------------------------------------------------
// GetTime()
// Get the current time
//-----------------------------------------------------------------------------
float CTimer::GetTime()
{
    UINT64 Ticks;

    if( m_bRunning )
    {
        if( !QueryPerformanceCounter((LARGE_INTEGER *)&Ticks) )
        {
            Ticks = (UINT64)timeGetTime();
        }
    }
    else
        Ticks = m_StopTicks;

    // Subtract the time when we started to get
    // the time our timer has been running
    Ticks -= m_BaseTicks;

    return (float)(__int64)Ticks / (float)(__int64)m_TicksPerSecond;
}

//-----------------------------------------------------------------------------
// Frame()
// Call this once per frame
//-----------------------------------------------------------------------------
void CTimer::Frame()
{
    m_fDeltaTime = GetTime() - m_fFrameTime;
    m_fFrameTime += m_fDeltaTime;

    // Update frames per second counter
    m_dwNumFrames++;
    if( m_fFrameTime - m_fLastUpdate > FPS_INTERVAL )
    {
        m_fFps = m_dwNumFrames / (m_fFrameTime - m_fLastUpdate);
        m_dwNumFrames = 0;
        m_fLastUpdate = m_fFrameTime;
    }
}

//-----------------------------------------------------------------------------
// GetFps()
//-----------------------------------------------------------------------------
float CTimer::GetFps()
{
    return m_fFps;
}

//-----------------------------------------------------------------------------
// GetFrameTime()
// This is the time when Frame() was called last
//-----------------------------------------------------------------------------
float CTimer::GetFrameTime()
{
    return m_fFrameTime;
}

//-----------------------------------------------------------------------------
// GetDeltaTime()
// This is the time that passed between the last two calls to Frame()
//-----------------------------------------------------------------------------
float CTimer::GetDeltaTime()
{
    return m_fDeltaTime;
}

//-----------------------------------------------------------------------------
// IsRunning()
//-----------------------------------------------------------------------------
bool CTimer::IsRunning()
{
    return m_bRunning;
}
//-----------------------------------------------------------------------------

Yes, there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

Originally posted by thewizard75:
Yes, there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

That’s bull, because my app has always run at 75FPS with 75Hz refresh & vsync enabled on Win2K with a GeForce2 GTS 64MB and any of the 6 or 8 drivers I have tested so far.

I have to agree with LordKronos; that is an absolute pile. My OpenGL app runs at over 2000fps on 7.68 nVIDIA reference drivers under 2k now. It runs that fast as it does practically nothing, not 'coz I’m a great coder or hack drivers or even 'coz I know what VSYNC is…

The nVIDIA guys would have sorted that out immediately. It’s really quite simple:

  1. Get the 7.68 Detonators…
  2. Install DirectX 8.0a… (won’t help OGL performance but will stabilise the 7.xx drivers as they’re optimised for DX 8, not 7)
  3. If you’ve got a VIA board, get the patch from MS…

I’m sorry to say that a lot of people have been posting utter rubbish in response to this problem. 75 fps with 60Hz VSYNC! I don’t think so!!

If you want more information on any of the above then post onto the video cards forum @ http://www.hardwarecentral.com

Stephen

Just a quick note in regard to the “If I can’t get more than 60fps, why did I buy this card?” comment:

If your program is capable of rendering at 200+ fps with vsync disabled, then you can add a lot of functionality. Say you add so much functionality that the frame-rate halves. You now have an application which can display 100fps - with no visual difference in performance. The monitor still refreshes at 60Hz, you still have 60fps.

If you wish to /see/ more frames per second, you need to do two things:

  1. Turn up the refresh rate for your monitor (75Hz, perhaps?)
  2. Get a bionic eye, because standard-issue human eyes have difficulty distinguishing the performance when you get to this level.

Also note that there is a reason ‘vsync’ exists. The monitor redraws the entire screen ‘X’ times per second (where ‘X’ is the refresh rate we’ve been talking about). If you throw new data into the frame buffer in the middle of a refresh, the image on one part of the monitor will not correspond to the image on the rest of the monitor (tearing). With vsync enabled, the swap waits for the moment when the electron beam in your CRT is moving from the end of one refresh to the beginning of the next (the vertical blank), and the buffers are exchanged in that window of time. The monitor gets the whole image without discontinuity this way.
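For what it’s worth, on Windows the swap interval can usually be controlled through the WGL_EXT_swap_control extension when the driver exposes it. A rough sketch (the function name SetVSync and the error handling are mine, not anything from this thread):

#include <windows.h>

// Entry point for WGL_EXT_swap_control (not in the standard headers of the time).
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// Try to turn vsync on (interval = 1) or off (interval = 0).
// Requires a current OpenGL rendering context; returns false if the driver
// does not expose the extension.
bool SetVSync(int interval)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT == NULL)
        return false;            // extension missing: the driver keeps its default behaviour

    return wglSwapIntervalEXT(interval) != FALSE;
}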

– Thae

May I direct your attention to http://www.geforcefaq.com/#sw:drv:refresh

Yes, the Nvidia drivers do have refresh rate problems in w2k.

Originally posted by thewizard75:
May I direct your attention to http://www.geforcefaq.com/#sw:drv:refresh

Yes, the Nvidia drivers do have refresh rate problems in w2k.

Well, let’s see… your original post is:

there is a bug with Nvidia and w2k that prevents opengl apps from using more than a 60Hz refresh rate (unless a workaround is used). So yes, this is the difference

You make it sound as if NO opengl program on ANY win2k system will EVER get more than 60FPS UNLESS a workaround is used. What this link mentions is a particular compatibility problem with a specific piece of hardware (and by the way, it says 75Hz for this bug, not 60Hz).

The rest of the bullet points for that question have nothing to do with nvidia nor opengl.

Well then, where is the problem? I’m running an OGL app (no GLUT) that does nothing except draw the FPS on screen. And all I’m getting is 60fps with VSYNC on at a 75Hz refresh rate. Where is my other 15fps going? I just need a simple answer. (By the way, I don’t think my timer code is wrong; I’ve posted it above.)

Oh, I almost forgot: I’m using whatever drivers I got from nVidia’s website for Win2K. I don’t wish to use any “leaked” drivers.

Originally posted by TheGecko:
Well then, where is the problem? I’m running an OGL app (no GLUT) that does nothing except draw the FPS on screen. And all I’m getting is 60fps with VSYNC on at a 75Hz refresh rate. Where is my other 15fps going?

Question… are you changing the screen resolution? If so, you might be changing it to 60Hz without even realizing it. Can you post the app and source somewhere where I can download and try it? Because it seems fine to me.

Hmm… I am, actually. My desktop is at 1280x1024, but my app resizes to 1024x768. I’ll post a link to my app here as soon as I get back from work.

If you manually change your desktop resolution (in Windows) to the resolution your application runs at, and then modify your refresh rate (or modify the target resolution’s refresh rate with a handy application such as the one that comes with the ASUS GeForce2 boards), then you can be sure of the refresh rate at that resolution…
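As a rough illustration of requesting an explicit refresh rate from code instead of through the control panel, something along these lines should work on Win32 (the helper name SetDisplayMode and the lack of error reporting are my own simplifications; whether a given rate is accepted depends on the monitor and driver):

#include <windows.h>

// Ask Windows for a fullscreen mode with an explicit refresh rate.
// Returns true if the mode switch succeeded.
bool SetDisplayMode(int width, int height, int bitsPerPixel, int refreshHz)
{
    DEVMODE dm;
    ZeroMemory(&dm, sizeof(dm));
    dm.dmSize             = sizeof(dm);
    dm.dmPelsWidth        = width;
    dm.dmPelsHeight       = height;
    dm.dmBitsPerPel       = bitsPerPixel;
    dm.dmDisplayFrequency = refreshHz;
    dm.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL | DM_DISPLAYFREQUENCY;

    return ChangeDisplaySettings(&dm, CDS_FULLSCREEN) == DISP_CHANGE_SUCCESSFUL;
}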

It seems strange that the default rate for ‘all’ resolutions is not more easily or intuitively modified than it is… shrug MS doesn’t expect people to muss with system settings very often I guess.

– Jeff

How do you get 2000+ fps?
In 640x480, fullscreen, no vsync, rendering nothing but my FPS, I get 1130fps.
That’s with a GeForce2 Ultra.
What did you do to get 2000+?
One more thing: I’m using the latest leaked nVidia drivers, 11.1 I believe.

I’d also like to point out that on my configuration, if I’m in 640x480, vsync has no effect. Does anyone know why?

thanks - Dave


Probably coz I’m not using GLUT.

I’ve written my own interaction loop so that I don’t have to use GLUT’s incredibly slow one.

Also, I have built a very cut-down window class. I dunno what machine spec your GF2 is running on, but mine’s just a 256 DDR on a 1GHz Athlon (nothing overclocked).

Stephen

And another thing…

The 7.68s are official drivers; they’ve been approved by Microsoft and are to be released very soon. I get them early as I’m a registered nVIDIA developer, so they are not illegal!

Stephen

Nah, I’m not using GLUT either, but you’ve got DDR RAM, so that’s probably why it’s so fast, I’m guessing.

Do Microsoft-approved drivers just mean they have their digital signature? If so, where do you download these approved drivers? Because the reference drivers on the nVidia site don’t have a Microsoft digital signature.

thanks- Dave


They’ve been signed by the WHQL so make of that what you will.

I’ve just checked nVIDIA’s developer driver list and can’t find them there; as I said in an earlier post, they’re due out pretty soon. So I probably got them from hardwarecentral. Go to the video card discussion forum, do a search for 7.68, and you’ll find them.

Hope it helps,

Stephen