Maintaining FPS

I understand how to make it time-based, but what do I do with the keyboard input?

Hunnh? :expressionless:

Where’d that come from?

I dunno, what r u doing w/ it?

it shouldn’t need any changing, i don’t think…

Well, you’ll have to track how long each key has been pressed during the last frame. It should be obvious that the world can only be transformed before a frame, not in between.
That took me quite some time to get my head around… Oh, I still use the WM_KEYUP/WM_KEYDOWN messages for that. For my realtime engine I wonder whether any WM_KEYDOWN messages arrive during a frame, since it won’t call DispatchMessage then. I think that is the problem I currently have… I don’t want to use GetAsyncKeyState, since I’d have to poll all keys every 1/1000 of a second to do it correctly.
Maybe it would be best to create a hook function for the application that doesn’t post the WM_KEYDOWN message but sends it to the application. Or are they already sent?
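What I mean by tracking is roughly this - just a sketch, assuming a normal Win32 message pump, and the array names are made up:

#include <windows.h>

static DWORD g_downSince[256]; // 0 = key is up, else GetTickCount() at the press
static DWORD g_heldMs[256];    // ms each key was held during the current frame

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_KEYDOWN:
        if (g_downSince[wp & 0xFF] == 0)       // ignore auto-repeat messages
            g_downSince[wp & 0xFF] = GetTickCount();
        return 0;
    case WM_KEYUP:
        if (g_downSince[wp & 0xFF] != 0)
            g_heldMs[wp & 0xFF] += GetTickCount() - g_downSince[wp & 0xFF];
        g_downSince[wp & 0xFF] = 0;
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

// Called once per frame, before transforming the world:
void SampleHeldTimes(DWORD heldOut[256])
{
    DWORD now = GetTickCount();
    for (int i = 0; i < 256; i++)
    {
        heldOut[i] = g_heldMs[i];              // time closed out by WM_KEYUP
        g_heldMs[i] = 0;
        if (g_downSince[i] != 0)               // key still held: count the partial
        {                                      // interval and restart it
            heldOut[i] += now - g_downSince[i];
            g_downSince[i] = now;
        }
    }
}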

Hey, mike, i agree. screw DirectInput! i’m going w/ WM_KEYDOWN!

seriously, though, i don’t work on my own computer, so i can’t install dx.

i pretty much just put all of the wm_key messages in a linked list along w/ the time and go through it every frame.

wrote a whole wrapper for it.

nice to see someone else in my boat (for possibly different reasons)

i’ve never missed a key
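the guts of the wrapper look something like this - a stripped-down sketch, names made up, w/ std::vector standing in for the linked list:

#include <windows.h>
#include <vector>

struct KeyEvent { UINT msg; WPARAM key; DWORD time; };
static std::vector<KeyEvent> g_pending;            // the buffered wm_key messages

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    if (msg == WM_KEYDOWN || msg == WM_KEYUP)
    {
        KeyEvent e = { msg, wp, GetTickCount() };  // stamp the event with its time
        g_pending.push_back(e);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}

void ProcessInputForFrame()                        // called once per frame
{
    for (size_t i = 0; i < g_pending.size(); i++)
    {
        // apply g_pending[i].key over the interval implied by g_pending[i].time
    }
    g_pending.clear();                             // consumed; next frame starts fresh
}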

But what about that multiple-frames-in-pipeline problem?

Originally posted by Succinct:
using time based animation i think would make something happen closer to the actual time than using frame based… especially because time-based animation has a temporal resolution limited only by available hardware, whereas frame based is accurate only to 1/fps.

GetTickCount is accurate to about 1/1000th of a second, give or take 10 or 25 ms, but QueryPerformanceCounter is accurate to 1/1193182nd of a second on the computer i’m on right now. Frame based animation is accurate only to the next 1/60th of a second.

Actually I use an event calendar for the kinematics simulation that is time based. I use QueryPerformanceCounter to drive the timer so that I can make very small adjustments in my clock whenever needed. The rendering engine is frame based in that it simply takes a picture of the current kinematic state at a constant frame rate.
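Roughly like this - a sketch only, with illustrative step sizes, and StepSimulation / RenderSnapshot standing in for my actual calls:

#include <windows.h>

void StepSimulation(double dt);           // stand-ins, defined elsewhere
void RenderSnapshot();

double SecondsNow()                       // QueryPerformanceCounter as a clock
{
    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);     // ticks per second (1193182 here)
    QueryPerformanceCounter(&now);
    return (double)now.QuadPart / (double)freq.QuadPart;
}

void RunLoop()
{
    double simTime = SecondsNow();
    const double dt = 0.001;              // small fixed step for the kinematics
    const double framePeriod = 1.0 / 60.0;// constant frame rate for the renderer
    double nextFrame = simTime;

    for (;;)
    {
        double now = SecondsNow();
        while (simTime < now)             // advance the event calendar in small,
        {                                 // accurate steps
            StepSimulation(dt);
            simTime += dt;
        }
        if (now >= nextFrame)             // "take a picture" of the current state
        {
            RenderSnapshot();
            nextFrame += framePeriod;
        }
    }
}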

Well, all the camera and player movement looks great. I checked it on a computer stronger than mine and it looks great there too (and also on slower computers - it doesn’t look great, but it’s at least timed correctly).
The only problem left is the animation interpolation. Let’s say I have to cycle between frames 1-20 and I want 1/30 of a second to pass between each frame, and of course subframes have to be created to make the animation smooth.
What techniques are usually used?

As I understand the question, I would just set the frame rate limiter to 30 Hz and render each of the frames 1-20. I suspect my understanding of your question is flawed, though; could you clarify?
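(And by frame rate limiter I mean nothing fancier than this sketch - Sleep’s granularity is coarse, so it is only approximate:)

#include <windows.h>

void WaitForNextFrame(double hz, DWORD* lastTick)  // *lastTick from GetTickCount()
{
    DWORD period = (DWORD)(1000.0 / hz);           // ms per frame, e.g. 33 at 30 Hz
    DWORD elapsed = GetTickCount() - *lastTick;
    if (elapsed < period)
        Sleep(period - elapsed);                   // wait out the rest of the frame
    *lastTick = GetTickCount();
}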

inspired by the answers of people here I DIDN’T create an FPS limiter but created a TIME BASED engine. So sometimes it renders 180 FPS and sometimes 40 FPS. It’s very easy to implement for movement and rotation, but my problem (which I’ve already kinda solved) was animating the quake 2 models my engine loads and creating subframes between the model frames to make the animation smooth and time based too.
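For the movement part it really is just scaling by the frame time, e.g.:

// dt = seconds since the previous frame, from whatever timer you use
float Advance(float pos, float unitsPerSecond, float dt)
{
    return pos + unitsPerSecond * dt;   // same world speed at 40 or 180 FPS
}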

The solution to animating MD2 models is to set a “cycle time” for each animation cycle, and then linearly interpolate between the two frames that are closest. I.e. suppose your cycle is N frames starting at F, and the duration of the entire cycle is D. Then your animation vertex is calculated like so for time T (0 <= T < D):

int ix = (int)floor((float)N * T / D);        // which frame of the cycle we are in
float terp = (float)N * (T / D) - (float)ix;  // fractional progress toward the next frame
float omterp = 1.0f - terp;
int ix2 = ix + 1; if (ix2 == N) ix2 = 0;      // wrap around at the end of the cycle

for (int j = 0; j < (int)(sizeof(outMesh) / sizeof(outMesh[0])); j++)
{
    outMesh[j].x = inMeshes[ix + F][j].x * omterp + inMeshes[ix2 + F][j].x * terp;
    outMesh[j].y = inMeshes[ix + F][j].y * omterp + inMeshes[ix2 + F][j].y * terp;
    outMesh[j].z = inMeshes[ix + F][j].z * omterp + inMeshes[ix2 + F][j].z * terp;
}
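To drive T from the wall clock, wrap the elapsed time into the cycle; fmod from <math.h> does it (CycleTime is my name, nothing standard):

#include <math.h>

// elapsed = seconds since the animation cycle started
float CycleTime(float elapsed, float D)
{
    return (float)fmod(elapsed, D);   // wraps the clock into 0 <= T < D
}

With the numbers above (N = 20 frames at 1/30 s each, so D = 20/30 s), an elapsed time of 0.05 s gives N*T/D = 1.5, i.e. ix = 1 and terp = 0.5: an even blend of frames F+1 and F+2.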

You can also get fancier and use better-than-linear interpolation, such as cubic Hermite or (for you signal processing buffs) some form of sinc-based location resampler, but the linear interpolation is typically good enough, especially for a fast shooter.

well, I didn’t quite understand. Can you explain that again? Or maybe somebody else can?
but 10x anyway!

Originally posted by pleopard:
The rendering engine is frame based in that it simply takes a picture of the current kinematic state at a constant frame rate.

wow, what a nifty idea… so the actual physics is based on time, but the renderer just takes snapshots whenever the screen needs refreshing… kewl

very good idea

*applauds Pleopard*

i assume this only works when the rendering is done in a separate thread, so i’m putting my rendering into a separate thread just to implement this. i’m sick of using 100% of my cpu’s time rendering 120 fps…

thx for the insight - succinct


I think it doesn’t need multiple threads. The idea is to parameterize the movement of the world, so that given a time you know where every object is. Then, before each frame, you get the time, move the objects according to it, and render that as a snapshot. If you have a constant frame rate, you could even increment the time a bit, so that the rendering is up to date in the middle of the frame. Yeah, sounds a bit confusing.
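To make it less confusing, here is the smallest version of what I mean (the names are made up):

struct Object { float x0, vx; };       // x(t) = x0 + vx * t, fully parameterized

float PositionAt(const Object& o, double t)
{
    return o.x0 + o.vx * (float)t;     // evaluate the world at any instant
}

// Before each frame:
//   double t = /* current time from your timer */;   // or t + framePeriod/2
//   place every object at PositionAt(object, t) and render that snapshot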

Michael S> I think that would cause a problem for physics/simulations, since it would force them to use a time step equal to that of the rendering functions. Since (all?) simulations solve differential equations, using a large timestep would cause instability/inaccuracy issues. Using a separate, internal sample rate for the real-time stuff should work though…
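For instance, an undamped spring (x'' = -k*x) stepped with explicit Euler at a frame-sized timestep visibly blows up; the numbers below are purely illustrative:

#include <stdio.h>

int main()
{
    const float k = 100.0f;            // spring stiffness
    float x = 1.0f, v = 0.0f;
    const float dt = 0.3f;             // a "frame-sized" step: far too large
    for (int i = 0; i < 10; i++)
    {
        float vNew = v - k * x * dt;   // explicit Euler step
        x += v * dt;
        v = vNew;
        printf("x = %g\n", x);         // |x| grows instead of oscillating
    }
    return 0;
}

A much smaller internal step (or a sturdier integrator) keeps it stable regardless of the frame rate.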

Oh, well, I don’t really get what you mean.
You mean simulations that can’t be calculated as a direct function of time, because they’re so complex that it’s an iterative approach with fixed time steps?