Sometimes we need to build interlaced animations for NTSC or PAL output. Once per field, we have to draw one field, ReadPixels() it, and send the pixels to a video board.
We have two approaches: read the whole frame and send only the alternate lines (reading twice as many lines as needed), or read exactly the lines we need, issuing hundreds of one-line ReadPixels() calls.
We have found that which approach works better depends on the hardware and its driver version.
Because this happens in real time, every millisecond matters. What we are missing is a way to tell ReadPixels() that it must read alternate lines, not the whole rectangle. The driver could then optimize the transfer, avoiding the overhead of reading twice as much data as needed, or of making hundreds of function calls.
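For the first approach, the step after the full-frame read can be sketched as follows (a minimal sketch, assuming tightly packed 32-bit RGBA pixels; extract_field and the buffer layout are my own illustration, not code from this thread):

```c
#include <stdint.h>
#include <string.h>

/* Copy every second scanline of a full progressive frame into a
 * half-height field buffer.  frame holds width*height RGBA pixels,
 * field holds width*(height/2).  parity 0 selects the even rows
 * (top field), parity 1 the odd rows (bottom field). */
static void extract_field(const uint32_t *frame, uint32_t *field,
                          int width, int height, int parity)
{
    for (int y = parity; y < height; y += 2)
        memcpy(field + (size_t)(y / 2) * width,
               frame + (size_t)y * width,
               (size_t)width * sizeof *frame);
}
```

This per-row copy is exactly the "twice the needed data" overhead described above: the whole frame crosses the bus even though only half of it reaches the video board.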
um… I don’t know what kind of hardware you have, but my GFFX 5600 does that just fine.
I regularly hook it up to a widescreen TV and watch anime, and it’s pretty nice quality.
One way of solving this is to render two fields: one at the top of the screen and one at the bottom, with just a tiny bit of offset (about one pixel).
It should produce the same effect.
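One possible reading of that suggestion (my interpretation; the Viewport struct, the helper name, and the direction of the one-pixel offset are all assumptions, and the right offset depends on which field order the video board expects):

```c
/* A hypothetical helper sketching the two-field layout: the frame is
 * split into two half-height viewports, one field stacked above the
 * other.  The bottom field is nudged by one pixel (the "tiny bit of
 * offset" from the post); whether it moves up or down depends on
 * top-field-first versus bottom-field-first output. */
typedef struct { int x, y, w, h; } Viewport;

static void field_viewports(int frame_w, int frame_h,
                            Viewport *top, Viewport *bottom)
{
    int field_h = frame_h / 2;

    top->x = 0;           /* upper half of the window (GL y is bottom-up) */
    top->y = field_h;
    top->w = frame_w;
    top->h = field_h;

    bottom->x = 0;        /* lower half, shifted by one pixel */
    bottom->y = 1;
    bottom->w = frame_w;
    bottom->h = field_h;
}
```

The scene would be rendered once into each viewport, and a single full-frame ReadPixels() then delivers both fields already stacked, with no line skipping needed.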
Originally posted by kcmanuel:
We miss a way to say to ReadPixels() that it must read alternate lines, not the whole rectangle.
There are a bunch of video-format ReadPixels/DrawPixels extensions. Some of them support interlaced transfers, some support YCrCb component packing and color space conversion.
The GL_OML_interlace extension is probably the most up-to-date. It was defined by the Khronos Group as part of OpenML 1.0 (OpenML is a set of APIs for handling video data, including a handful of OpenGL extensions). The GL_INGR_interlace_read extension has similar functionality. Apple may have one that isn’t in the extensions registry yet; I’m not sure.
Checking Delphi3d’s database, it looks like 3Dlabs Wildcat cards support the INGR and OML extensions. Not sure what other vendors do.
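Detecting one of these extensions is the usual substring check on the GL_EXTENSIONS string; a sketch (the has_extension helper is my own, and the careful boundary check matters because one extension name can be a prefix of another):

```c
#include <string.h>

/* Look for an extension name in a GL_EXTENSIONS-style space-separated
 * string.  A plain strstr() is not enough, because one extension name
 * may be a prefix of another; require the match to be bounded by the
 * string edges or by spaces. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;

    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == extensions) || (p[-1] == ' ');
        int ends_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (starts_ok && ends_ok)
            return 1;
        p += len;
    }
    return 0;
}
```

As I read the extension specs, once GL_OML_interlace is found you would glEnable(GL_INTERLACE_READ_OML) (or GL_INTERLACE_READ_INGR for the INGR variant) before calling ReadPixels, and the driver then skips every other row for you; check the specs in the registry for the exact semantics on your hardware.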