Limiting Frame Rates


After reading this ( ) article, I was wondering: is running an animation at greater than 24 frames per second necessary?

Assume that I am running on dedicated hardware and I can get an interrupt every 1/24th of a second.

Further assume that I can generate a motion-blurred image in less than 1/200th of a second (fully rendered in the frame buffer). Thus, there is no penalty for making a frame motion blurred or not.

So would this be enough to make the motion look smooth, and thus save processing time, if my machine could generate 200 fps using brute-force rendering?

Basically, it is like having a graphics engine generate DVD-quality frames (with all the fancy effects, such as “correct” motion blur). Perhaps it would be just as fruitful to generate “better” frames rather than “faster” frames.


In games, motion blur would cause the game to behave erratically. Take a game like Quake II: if motion blur were used, there would be problems calculating the exact position of an object, so it would be really tough to hit something with your weapon. With motion blur in a game, the object in question would not really exist in any of the places where the “blur” is positioned. Instead we have perfectly drawn frames, so objects can always be calculated at set places in space. So how do you simulate motion blur in a video game? Easy: have games run at over 60 fps!

I disagree with the document I referenced that blurring a frame makes it hard to position objects. A simple solution is to run the game logic at 240 (discrete) fps and have the graphics run at 24 fps. Thus each displayed frame is composed of 10 “logical/physical” frames. (Again, this is a hypothetical system, so there are no operating system issues such as multitasking.)
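The 240/24 split above can be sketched as a fixed-timestep loop. This is my own minimal illustration, not code from the thread; `update` and `render` are hypothetical callbacks standing in for the game logic and the blur/display stage.

```python
# Hypothetical sketch of the 240 Hz logic / 24 Hz render split:
# each displayed frame is built from 10 fixed-step logic updates.
LOGIC_HZ = 240
RENDER_HZ = 24
STEPS_PER_FRAME = LOGIC_HZ // RENDER_HZ  # 10 logic steps per displayed frame

def run_one_second(update, render):
    """Run one simulated second: 240 logic updates, 24 rendered frames."""
    dt = 1.0 / LOGIC_HZ
    for _frame in range(RENDER_HZ):
        subframes = []
        for _ in range(STEPS_PER_FRAME):
            subframes.append(update(dt))  # advance the world at 240 Hz
        render(subframes)                 # blur/accumulate the 10 subframes
```

Because the logic rate is an integer multiple of the display rate, every displayed frame sees exactly the same number of world updates, which keeps the blur temporally uniform.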


Don’t. Frame limiters are way too hard to do right and don’t add anything to the user experience for your efforts. VSyncing to the display refresh is far easier and is good enough if you can keep your frame rate up there.

1) 24 fps is not “enough” for any interactivity and never has been.
a) Don’t assume everyone’s brain is too slow to notice. There are plenty of people who find even filmed (and thus ‘correctly’ motion-blurred) DVD or cinema content choppy, most notably when there’s fast movement (of the camera or otherwise).
b) Also think about input latency for a moment. How much time will pass between receiving user input and its full realization on screen? How does this time interval relate to frame rate?
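To put a rough number on point b), here is a back-of-the-envelope sketch. The one-frame pipeline depth is my assumption for illustration, not something stated in the thread.

```python
def worst_case_latency_ms(fps, pipeline_frames=1):
    """Worst case: input arrives just after the current frame sampled it,
    so it waits one full frame period, then pipeline_frames more periods
    before the result reaches the screen."""
    frame_ms = 1000.0 / fps
    return frame_ms * (1 + pipeline_frames)
```

Under this model, 24 fps gives roughly 83 ms worst-case input-to-display latency, while 60 fps gives roughly 33 ms.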

2) If you can indeed generate correct motion blur in 5 ms, that’s great, more power to you. However, I suspect it’s rather a motion trail, and that won’t please the eye nearly as much as true temporal motion blur. The latter really requires rendering at least several true world updates per displayed frame; reusing older ones is physically wrong. The number of subframes required is technically infinite; in practice you can try to be clever, pick a comfortable upper ceiling, and use fewer if there’s little on-screen movement.

This is quite pointless if you haven’t reached the display’s physical refresh rate yet. E.g. 20 fps with three temporal samples per frame requires more processing power than 60 ‘real’ fps, because you spend extra bandwidth mixing the frames. And then it will not only not look any better (if the retina and brain are indeed that slow, the blur will happen automatically), it will also increase input latency by a factor of three.
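The 20 fps × 3 samples example can be made concrete with a toy cost model. The 10% per-update blending overhead is an arbitrary assumption of mine, just to show that the subsampled path never comes out cheaper.

```python
def render_cost(display_fps, samples_per_frame, mix_overhead=0.10):
    """Toy cost model: world updates per second, plus a fractional
    per-update overhead (assumed 10%) for blending when there is more
    than one temporal sample per displayed frame."""
    updates = display_fps * samples_per_frame
    blending = updates * mix_overhead if samples_per_frame > 1 else 0.0
    return updates + blending
```

With these assumptions, 20 fps with 3 samples costs 66 update-equivalents per second, more than the 60 of plain 60 fps rendering, while also tripling the input latency as noted above.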

Phew, what nonsense people are able to tell when they talk about fps…
Seriously, the more fps you have, the better. Of course, the monitor has to follow, else it is wasted on the display (but not on the other calculations of the game).

I did some experiments in OpenGL with two fast-rotating, contrasted rectangles; you can try them there:

The page is not updated with the last experiments I made, but there is enough to test yourself. You can toggle vsync and motion blur to see the differences.

IMHO, the best would be to have a 100 or 120 Hz capable monitor, an attainable frame rate above 120 fps, and vsync enabled. If the card is able to do above 480 fps, put 4 motion-blur subframes on each image.

So, higher display refresh and fps are nicer than equivalent motion blur, but if you can afford high display refresh + motion blur, do it.

Lastly, vsync is very nice IF the fps stays above the monitor refresh rate. Otherwise, it is better to leave it off. If you have 75 Hz and 60 fps, for example, vsync will force you down to 75/2 = 37.5 fps…
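The 75 Hz / 60 fps example follows from the rule that, with vsync on, a frame that misses a refresh must wait for the next boundary. A minimal sketch of the simple double-buffered case (triple buffering behaves differently):

```python
import math

def vsync_effective_fps(refresh_hz, achievable_fps):
    """With vsync on (double buffering), a frame that misses a refresh
    waits for the next one, so the displayed rate drops to
    refresh / ceil(refresh / achievable)."""
    if achievable_fps >= refresh_hz:
        return float(refresh_hz)
    return refresh_hz / math.ceil(refresh_hz / achievable_fps)
```

So a renderer capable of 60 fps on a 75 Hz display needs 2 refresh intervals per frame, landing at exactly 37.5 fps, matching the figure above.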

(You can browse the GL forum mentioned on my web page, but please refrain from posting on it; the subject was restricted to performance problems that have been solved thanks to the many contributors.)

On a side note, I have seen two special movies projected at 60 Hz: the realism they convey is far superior to any 3D stereo projection system. I saw them at the Futuroscope, near Limoges, in France. It is a sort of “high-tech movie projection” theme park, with various Imax systems, such as 70 mm frames, stereo projections, dynamic simulators, and the like.


Just to be more precise, as I cannot edit my post above:

A movie projected at 60 Hz conveys more realism than a stereo-projected movie at 2×30 Hz. And of course, if it were 2×60 Hz, the 3D stereo would be better.

Hi, thanks for the responses.

1)24 fps is not “enough” for any interactivity and has never been.

I disagree. 24 fps corresponds to a latency of about 40 milliseconds, which “I” consider to be good. Also, the game “logic” would run at 240 fps, so it would not miss much user input. (Also, in some games, the user can “predicate” what to do next, so they will not have to wait for the frame to appear before they give their input.)

I would also like to remind you that I was talking about a special custom-made system, built only for the purpose of determining whether motion blur could add more realism. So that system will not suffer from effects such as monitor refresh timing problems, multitasking, and other OS issues. And yes, it can generate “correct” motion blur (so not just a motion trail) in whatever time is necessary. This is not a general-purpose system; it is custom made to meet all requirements. (Think of it as an arcade machine with dedicated hardware.) Please assume there are no hardware issues; this is strictly about using fewer rather than more frames.

I also agree that more frames per second is better, but my main question is: are 24 motion-blurred images better than 48 sharp images, assuming 24 blurred frames can be generated as fast as 24 static images? (Assume there is parallel hardware that can “correctly” blur an image but do nothing else, so nothing is gained by not using it.) This way the computer has an additional 24 spare frames’ worth of time for other processing. However, if 24 blurred frames are not equivalent to 48 sharp frames, then the results will obviously look bad.

I see that you guys are suggesting that higher refresh rates are better. I agree, but I was trying to see if I could buy more time for the CPU (how much is enough/necessary, rather than ideal).

Relating this back to OpenGL, I have some programs which can perform blurring using custom-made techniques; however, they work best at low frequencies. Higher frequencies add more overhead to the calculations, so I was suggesting the CPU could buy more time by motion-blurring frames (since it is virtually free at low frequencies). But I don’t want the visual output to suffer too much.

I meant to say “predict” not “predicate”.

Concerning your hardware system, I would say that it might be nicer to have motion blur with lower frame rates. It will depend both on the quality of your motion blur and on the final frame rate.

Have you tried my progs at ? What do you think of them? Sure, the blur is simply done by accumulating several frames, not with “custom-made techniques”. How do you create your custom blur, by the way?

Talking about “40 ms interactivity”: it is not so good for fast-reaction and anticipation games. A 40 ms ping is the minimum to play Quake3 correctly in multiplayer. Of course, for a slower-paced game, it could be enough.

Another thing with motion blur: it assumes that the eye of the observer stays completely static. This is far from true. During a fairly fast pan, the eye tries to “jump and lock” onto the background, as hardware image stabilization is built into our eyes. Bad luck: in a movie the background is already motion blurred, so it appears ugly. With a higher frame rate, there will be less motion blur, and the eye will be happy.

[This message has been edited by ZbuffeR (edited 01-25-2004).]


I haven’t tried your programs yet because I am on a Solaris machine. I will try them once I get to a Windows machine.

I was assuming that 40 ms of display latency is tolerable because the input latency would be around 4 ms. The only requirement is that the user must know what to do next using the current image they have, since the next one will be 40 ms away. This should work well in applications where prediction is possible.

The technique I am using is basically rendering “ahead”. I was assuming that if an object is moving fast, then the leading and trailing edges of the object will be blurred more than the middle of the object. The technique is not general purpose and is custom made for each object. But it looks correct.
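As I understand the edge-weighting idea, it might look something like the following. This is purely my own sketch of a plausible per-point opacity along the motion axis, not the poster's actual technique; all names here are hypothetical.

```python
def edge_alpha(x, trailing_edge, leading_edge, blur_width):
    """Opacity at position x along the motion axis: fully opaque in the
    middle of the object, fading linearly to zero over blur_width at the
    leading and trailing edges (the fast-moving parts)."""
    if x < trailing_edge or x > leading_edge:
        return 0.0
    distance_from_edge = min(x - trailing_edge, leading_edge - x)
    return min(1.0, distance_from_edge / blur_width)
```

The interior of the object stays sharp while only the edges fade, which matches the observation that the leading and trailing edges blur more than the middle.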

The main advantage is that the screen is only updated rarely. That is, the program will “accumulate” (without using the accumulation buffer) 5 frames and then display the result on the 6th frame. This gives a speedup in rendering since there is no display overhead. Also, buffer and depth clears occur only once every 6 frames. The output runs at about 20 to 30 fps, but internally it runs at 180 fps.
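The accumulate-then-present idea can be simulated with plain arrays. This sketch models only the blending arithmetic; the post's actual version runs in OpenGL without the accumulation buffer, and the uniform averaging here is my assumption.

```python
import numpy as np

def blend_subframes(subframes):
    """Average N rendered subframes (float image arrays) into one
    motion-blurred output frame, emulating accumulation blending
    without the GL accumulation buffer."""
    acc = np.zeros_like(subframes[0], dtype=np.float64)
    for frame in subframes:
        acc += frame
    return acc / len(subframes)
```

With 5 subframes accumulated and the result shown on the 6th internal frame, an internal rate of 180 fps yields 180/6 = 30 displayed fps, consistent with the numbers above.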

Of course, the objects rendered are specially designed not to need a depth clear every frame, and they are fast to render provided there is no major state change. I was thinking of locking the output at 24 or 30 fps and then giving the time saved to other functions. Once the overall system is done, I will make it available for download.

It is hard to explain, but with motion blur the system gets extra “free” frames, since the display overhead is greatly reduced, thus allowing for faster (internal) frame creation.

I see your point about the eye being static. But the user will be looking through someone else’s eyes, so it “should” be tolerable. It’s like a photograph: motion blur can make it look more dynamic, but it cannot auto-refocus itself if you decide to examine the blurred parts (it’s like looking at a fast-moving tennis ball: if the eye tracks it, it is in focus; otherwise it is blurred). My system would need to know where the user is looking at all times, or simply use brute-force rendering to run at 60+ fps (which would leave no spare CPU cycles for other tasks).

Yeah, I’d have to agree with the guy who said 24 fps is not that good a frame rate. I really think it is better to have it running higher. In an engine I’m busy working on, I make sure that if the hardware is not there to run 32-40 fps, then it won’t run. Kinda silly if you want to sell your product, but it’s kinda cool if you want your game running smooth at all times (i.e. detect cpu < 1000, ram < 512, card < the card… don’t run, something like that). Also, what I’ve made is a system cache program: when the game loads up, it will ask the user if he/she wants to cache the current programs that are in memory to the drive (kinda like a hibernation thing), so it will free up more memory for the game. Still having problems with it, but it seems to work quite well for now.