After reading this ( http://www.daniele.ch/school/30vs60/30vs60_1.html ) article, I was wondering whether running an animation at greater than 24 frames per second is necessary.
Assume that I am running on dedicated hardware and I can get an interrupt every 1/24th of a second.
Further assume that I can generate a motion-blurred image in less than 1/200th of a second (fully rendered in the frame buffer). Thus, there is no penalty for making a frame motion blurred or not.
So would this be enough to make the motion look smooth, and thus save processing time, given that my machine could generate 200 fps using brute-force rendering?
Basically, it is like having a graphics engine generate DVD-quality frames (with all the fancy effects such as “correct” motion blur). Perhaps it would be just as fruitful to generate “better” frames rather than “faster” frames.
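One classic way to get “correct” motion blur is to render several sub-frames within each frame's exposure interval and average them together (an accumulation-buffer approach). A minimal sketch, where `render` is a hypothetical callback that returns a framebuffer (here just a flat list of pixel intensities) at a given time:

```python
def blurred_frame(render, t_start, t_end, subframes=10):
    """Average `subframes` renders spread over [t_start, t_end).

    `render(t)` is a hypothetical callback returning a framebuffer
    (a flat list of pixel intensities) for the scene at time t.
    """
    dt = (t_end - t_start) / subframes
    acc = None
    for i in range(subframes):
        frame = render(t_start + i * dt)
        if acc is None:
            acc = list(frame)
        else:
            acc = [a + p for a, p in zip(acc, frame)]
    return [a / subframes for a in acc]

# Toy example: one "pixel" whose brightness tracks a moving object,
# blurred over a one-second exposure.
frame = blurred_frame(lambda t: [t], 0.0, 1.0)  # averages t = 0.0 .. 0.9
```

This is just a sketch; a real renderer would accumulate in hardware rather than in Python lists, but the averaging principle is the same.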
In games, motion blur would cause the game to behave erratically. Take a game like Quake II: if motion blur were used, there would be problems calculating the exact position of an object, so it would be really tough to hit something with your weapon. With motion blur, the object in question would not actually exist in any of the places where the “blur” is drawn. Instead we have perfectly drawn frames, so objects can always be calculated at definite positions in space. So how do you simulate motion blur in a video game? Easy, have games run at over 60 fps!
In the document I referenced, I disagree that blurring a frame makes it hard to position objects. A simple solution is to run the game logic at 240 (discrete) fps and have the graphics run at 24 fps. Thus each displayed frame is composed of 10 “logical/physical” frames. (Again, this is a hypothetical system, so there are no operating system issues such as multitasking.)
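The decoupling described above can be sketched as a fixed-timestep loop: the simulation ticks at 240 Hz, and every 10th tick a frame is drawn. The `step` and `draw` callbacks are hypothetical stand-ins for game logic and rendering; on the dedicated hardware assumed in the question, the loop would instead be driven by the 1/24th-second interrupt.

```python
SIM_HZ = 240                    # logical/physical updates per second
RENDER_HZ = 24                  # displayed frames per second
SUBSTEPS = SIM_HZ // RENDER_HZ  # 10 simulation ticks per displayed frame

def run(seconds, step, draw):
    """Run `step(dt)` at SIM_HZ; call `draw()` once every SUBSTEPS ticks.

    Object positions are always exact at each discrete tick, so hit
    detection is unambiguous even if the drawn frame blurs across the
    10 ticks it covers.
    """
    dt = 1.0 / SIM_HZ
    ticks = int(seconds * SIM_HZ)
    for tick in range(1, ticks + 1):
        step(dt)
        if tick % SUBSTEPS == 0:
            draw()

# Count calls over one simulated second.
counts = {"step": 0, "draw": 0}
run(1.0,
    lambda dt: counts.__setitem__("step", counts["step"] + 1),
    lambda: counts.__setitem__("draw", counts["draw"] + 1))
```

Over one simulated second this performs 240 logic steps but only 24 draws, which is exactly the 10-to-1 ratio proposed above.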