Making an AVI movie of an animation

A couple of others have asked how one can make an AVI movie from an OpenGL animation. One response was to repeatedly capture the framebuffer to a set of .TGA files and then feed the .TGA files into a program called BMPtoAVI.

The problem I’m having is that capturing the framebuffer with glReadPixels() takes about 0.3 seconds with a 500x500 pixel viewport and almost 1 second with a full-screen viewport. Writing the image file takes roughly 1 second per image.

Does anyone else experience similar problems?

Can someone suggest a way to read and write images faster?

Is there a way to directly feed the framebuffer pixels to an AVI file without first writing them to a series of image files?

Depending on which graphics board you use, newer drivers could increase glReadPixels performance (e.g. NVIDIA Detonator 3, AKA 6.18).

In this post I described how to read images from an AVI file. With AVIStreamWrite and AVIStreamWriteData, the opposite direction should be possible too.

There is an AVIview program in the MSDN which shows reading and writing of AVI files. Haven’t looked at it yet.

Thx, but the idea is that the code should be platform independent.

It should run on SGI, Linux and Win32…

Originally posted by Uther_Pendragon:
Thx, but the idea is that the code should be platform independent.

It should run on SGI, Linux and Win32…

MS provides you with their multimedia API. I was looking for something platform independent, and I’ve found a link to “ImageMagick”; the guy who posted the msg claimed that one could use the library (it’s platform independent) to create AVI, MPEG…

but I couldn’t find any examples or docs on that…

ImageMagick can be useful indeed - thx.

But there is still the issue of how to get the rendered data into frames/images without ever displaying it.
(Platform independent, of course.)

I did something like this a while back. Rather than calling glReadPixels() for every frame, I added a “demo recording mode” to my 3D engine.

In this mode, the engine restricts itself to a fixed 20 fps. You navigate through the scene as usual, and the positions and orientations of all objects are written to a file.

Afterwards, you can load the recorded demo file, and the engine will play it back offline and call glReadPixels() to store the frames. Hence, the abominable performance of the framebuffer reads is not an issue.

  • Tom


I’ll try out something like this…


If you want to render without displaying, you have to render to an offscreen bitmap. This is OS dependent, but only until you have made your render context current on that bitmap. We wrote separate source code for Windows and UNIX here, but in the same file, with #ifdef UNIX.

Yep, that’s the only option I’m afraid.

Originally posted by Uther_Pendragon:
Yep, that’s the only option I’m afraid.

You don’t have to render into a “platform dependent bitmap” - you can simply create your own structure to store everything you would like for each frame…

Also, if you are interested in a “recording mode” (provided the environment is not generated randomly and the objects are not “driven” by some AI algorithm), you don’t need to record the “screenshots” during the “game”: you can record only your trajectory, and after finishing the recording, the saved trajectory is used to render the frames of your video…

Offscreen rendering has some advantages, btw: you can run more passes and add “some features”, then post the result (the AVI file) with some comments that it runs at least 90 fps; some people will get frustrated.