Speed: Linux vs. W2000

Almost all my OpenGL work until now has been in Linux, and I’m just starting to do some Windows development. I like to use SDL, and so I have compiled the SDL test program testgl.c in both OSs. This simple demo paints colors on the sides of a rotating cube. The W2000 version does 60 fps, the Linux version does 600 fps. (P4 1.8GHz, GF4 Ti).

I am a Linux fan, but I can’t believe this speed difference. Can someone explain the enormous discrepancy? My concern is that if there is something slowing the W2000 version down I want to know how to avoid it in my application.

thanks
Gib

Have you disabled vsync under Windows?

No, I haven’t. How do I do that?

Gib

Varies depending on video card.

For NVIDIA drivers, go here:

Start -> Control Panel -> Display -> Settings -> Advanced -> NVIDIA Tab -> OpenGL Settings -> Vertical Sync
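
If you prefer to control it from code rather than the driver panel, the WGL_EXT_swap_control extension can do the same thing. A minimal sketch, assuming your driver actually exports the extension (real code should check the extension string first):

/* Sketch: turn vsync off under Windows via WGL_EXT_swap_control.
   Assumes a current OpenGL context. */
#include <windows.h>
#include <GL/gl.h>

typedef BOOL (APIENTRY *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void disable_vsync(void)
{
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(0);   /* 0 = swap immediately, 1 = wait for vertical retrace */
}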

OK, I found it. That certainly makes a difference - W2000 now gives 2300 fps. It looks like ****e, of course. In fact, even when I put a 10 ms delay in (-> 100 fps) it still looks a bit jerky. Now I wonder why Linux gives only 600.

Anyway, it shows that I don’t have to worry about Windows speed.

thanks
Gib

Originally posted by gib:
[b]OK, I found it. That certainly makes a difference - W2000 now gives 2300 fps. It looks like ****e, of course. In fact, even when I put a 10 ms delay in (-> 100 fps) it still looks a bit jerky. Now I wonder why Linux gives only 600.[/b]
Buffer management, perhaps. Copy vs pointer flip or something.

> […] it still looks a bit jerky […]

This is not that surprising: your display is limited to 60 Hz (bad for the eyes, btw - pump that up ASAP and you’ll soon wonder how people can stand watching 60 Hz displays…).

If your render happens at 2300 Hz with no frame synchronization, it means the frame content changes almost 40 times in the time it takes your screen to get refreshed once, so you may get all kinds of stroboscopic effects.
It could also be an issue in your timing code (measure FPS frame to frame, and not each frame separately).

Yes, I realize that the 60 fps was just the monitor refresh rate, which I’d forgotten to set back to 85 after installing the latest nVidia driver.

I measure frame rate by rendering a large number of frames then dividing total time by number of frames.
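
Roughly like this - just a sketch, with draw_frame() standing in for the actual cube rendering:

/* Average FPS over many frames: total frames divided by total elapsed time. */
Uint32 start = SDL_GetTicks();
int frames;
for (frames = 0; frames < 10000; frames++) {
    draw_frame();              /* hypothetical: render the rotating cube */
    SDL_GL_SwapBuffers();
}
Uint32 elapsed = SDL_GetTicks() - start;   /* total time in milliseconds */
printf("%.1f fps\n", 1000.0 * frames / elapsed);
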
Gib

If you’re using the SDL timer, you’ll get sucky timer resolution under Windows. Under Linux SDL uses gettimeofday(), which will give you 1 ms resolution (the SDL API doesn’t give you better resolution than that), but the Windows version uses timeGetTime() (or GetTickCount(), I don’t remember which; they should be equivalent), which usually has a resolution of 10-50 ms.

Example: If your timer has a resolution of 50 ms, the best effective FPS figure you would get would be 20 FPS, since several frames will be rendered with the same “time stamp” (your motion will only be updated 20 times per second).

Conclusion: Linux => max 1000 “effective FPS”, Windows => max 20-100 “effective FPS”.
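
If you want to check what resolution you actually get, you can spin on the timer and print the size of the steps - a quick sketch:

/* Rough check of SDL_GetTicks() granularity: busy-wait until the value
   changes and print how big the jump was. */
Uint32 prev = SDL_GetTicks();
int i;
for (i = 0; i < 10; i++) {
    Uint32 now;
    do {
        now = SDL_GetTicks();
    } while (now == prev);
    printf("step: %u ms\n", (unsigned)(now - prev));
    prev = now;
}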

Oh, by the way - do you use the same resolution and color depth under Windows as under Linux? That could explain the speed difference.


Sorry, I don’t follow that. I run the program for, say, 10 seconds. Within that time N frames are drawn, therefore frame rate = N/10 fps. Are you saying that the SDL timer can’t measure 10 seconds accurately?

Gib

Originally posted by gib:
[b]Sorry, I don’t follow that. I run the program for, say, 10 seconds. Within that time N frames are drawn, therefore frame rate = N/10 fps. Are you saying that the SDL timer can’t measure 10 seconds accurately?

Gib[/b]

No, but if your movement is time-based, it is jerky because it only updates about 20 times a second due to the bad timer.

Marcus, I did have different color depths. Running both with 24 bpp and 1024x768 fullscreen I get:
Linux 620 fps
W2000 1550 fps

The difference seems to be from SDL_GL_SwapBuffers(). If I comment that out the Linux version gives 1650 fps, and W2000 as far as I can tell gives about the same (suppressing the buffer swap produces a strange premature program termination with W2000).
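
Instead of commenting the swap out, I suppose I could also time the call itself over a lot of frames - something like this sketch:

/* Estimate the per-swap cost by timing a large number of swaps.
   (With nothing drawn and vsync off this is mostly driver overhead.) */
Uint32 t0 = SDL_GetTicks();
int i;
for (i = 0; i < 5000; i++)
    SDL_GL_SwapBuffers();
printf("average swap: %.3f ms\n", (SDL_GetTicks() - t0) / 5000.0);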

Of course, the speed difference, though high in % terms, is purely academic as far as actual use is concerned.

Gib

OK, I see that Marcus’s point was about the jerkiness. I slow the test program down by including SDL_Delay(10), i.e. a nominal 10 ms delay. I guess the point is that in Windows the delay can vary quite a lot, which is why the frame rate I get in this case is 92 fps. But I have to take issue with his statement that SDL under Linux can give 1 ms resolution. I believe Linux can be tweaked to use a 1 ms time resolution, but as standard it is 10 ms. It is a minor annoyance, since 10 ms is a big chunk of CPU time these days.

Gib

Originally posted by gib:
I slow the test program down by including SDL_Delay(10), i.e. a nominal 10 ms delay.

But wouldn’t that force your program to run at max 100 FPS (1/10 ms = 100 Hz)? Do you really call SDL_Delay(10) for every frame?

What I meant with the timer resolution was that I thought you used SDL_GetTicks(), which is supposed to return the current time in millisecond ticks (relative to SDL initialization). If you for instance do this:

ticks = SDL_GetTicks();   /* milliseconds since SDL was initialized */
glLoadIdentity();
...
glRotated( 0.01 * (double)ticks, 0.0, 1.0, 0.0 );   /* rotation angle driven directly by the timer */
...

…then the rotation angle will only change 20 times per second (if you have a 50 ms resolution timer), making it appear as if the animation is running at 20 FPS.

> But I have to take issue with his statement that SDL under Linux can give 1 ms resolution.

Yes, we were talking about different timers. The average Linux sleep (SDL_Delay) interval is usually much bigger than 1 ms. I have these measurements (same computer):

Linux: 20 ms
Win2K: 10 ms
Win98: 5 ms

Of course, this varies between systems and different load situations.
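
You can measure this yourself by timing the call, e.g. something along these lines:

/* Rough estimate of the real SDL_Delay() granularity: request 1 ms many
   times and see how long each call actually takes on average. */
Uint32 t0 = SDL_GetTicks();
int i;
for (i = 0; i < 100; i++)
    SDL_Delay(1);
printf("average SDL_Delay(1): %.1f ms\n", (SDL_GetTicks() - t0) / 100.0);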

Regarding SwapBuffers, you are probably correct. Your Linux drivers and Windows drivers probably perform the swaps differently. And as you said, this is really not an issue for real world situations, given that the overhead of a buffer swap is less than 1 ms while most “real” scenes take at least 10 ms to render.

I think you will not get usable results by comparing two programs that do almost nothing (i.e., they must be displaying next to nothing to get fps > 500).

You should compare real situations. On my computer, I get a significantly higher framerate under Linux than under 98/XP.

btw: marcus, are you sure that SDL uses gettimeofday()? If it does, then it should give microsecond accuracy. At least it says so in the man page.
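
gettimeofday() at least reports microseconds; whether you really get microsecond accuracy depends on the kernel and hardware. A quick sketch of how it’s used:

/* gettimeofday() fills a struct timeval with seconds + microseconds. */
#include <sys/time.h>
#include <stdio.h>

int main(void)
{
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    /* ... work to be timed ... */
    gettimeofday(&t1, NULL);
    printf("elapsed: %ld us\n",
           (long)((t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec)));
    return 0;
}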

Marcus: Regarding the SDL_Delay(10), I was just playing around with the little demo, while getting up to speed with OpenGL on Windows. I now understand that your comment on resolution was not referring to SDL_Delay(). Still, I’m surprised that you can detect changes in time down to 1 ms with SDL_GetTicks() on Linux. At some point I formed the impression that it was accurate only to 10 ms, and I’ve been working under that assumption. I must have another look at it.

tpfsly: This wasn’t intended to have general significance as a comparison between Linux and Windows; I was just trying to understand the differences I observed in my tiny demo program. My initial post was about Linux being dramatically faster than W2000 - that turned out to be a result of my ignorance of the vsync issue. Actually, since so much of the heavy lifting is done on the GPU these days, I’d be surprised if there is much difference in OpenGL performance between the two OSs. But being a Linux enthusiast, I’d be happy to hear that the Penguin waddles faster.

Gib

I recall reading something a while ago, during NVIDIA’s 0.9.x series of drivers for Linux, when they introduced page flipping for X. I remember it only used page flipping for certain desktop/fullscreen resolution combinations. I don’t know if this still applies with the latest drivers. Or of course, I could be totally mistaken. But it may be what you are seeing. A block copy vs. a page flip would surely show major speed differences.

Yeah, SDL uses gettimeofday(). You should get millisecond precision, or microsecond, as zen says. Man it and see what it says (I would, but I’m not in Linux at the moment).

Old GLman


Originally posted by gib:
I’d be surprised if there is much difference in OpenGL performance between the two OSs.

So am I. I’m pretty sure it comes from SBA and/or fast writes not being enabled under Windows. I don’t know how to force them on, but I’d love to do so. I get this speed difference on NVIDIA cards - a GF2 MX and an nForce (I don’t have any ATI cards).

Note: there is no speed difference when my 3D engine is really slowed down by stencil shadowing.