How to do interleaved row rendering?

Originally posted by ZbuffeR:
Halve rendering height, as already said.
Thanks for the reminder, but the image quality of half-height rendering is not good enough for me, because I must combine the field images into a frame image. The combination copies the odd and even image lines into the frame image interleaved, so the frame image looks like this:
odd0
even0
odd1
even1
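A minimal CPU-side sketch of that interleaving step (the row size and packing are assumptions, one memcpy per row):

#include <cstring>
#include <cstddef>

// Interleave two half-height fields into one frame with the row layout above:
// frame row 2r comes from the odd field, frame row 2r+1 from the even field.
// Assumes all three buffers use the same tightly packed row size.
void interleaveFields(const unsigned char* oddField,
                      const unsigned char* evenField,
                      unsigned char* frame,
                      int fieldRows, std::size_t bytesPerRow)
{
    for (int r = 0; r < fieldRows; ++r) {
        std::memcpy(frame + (2 * r)     * bytesPerRow, oddField  + r * bytesPerRow, bytesPerRow);
        std::memcpy(frame + (2 * r + 1) * bytesPerRow, evenField + r * bytesPerRow, bytesPerRow);
    }
}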

Originally posted by yooyo:
Whatever you do, consider the following issues:

  • Your engine MUST deliver a constant 50 or 60 fields per second (depending on PAL or NTSC)
  • TV out on today's gaming cards reports 50 or 60 Hz
  • When your render engine works in frame mode, the scanline converter chip on the card discards every even or odd row, depending on the current field status and order.
  • If you want to render in field mode, then you must interleave the fields and deliver a constant 25 or 30 fps output. Each frame must be sent twice, or you must do careful timing in the app and call SwapBuffers every 20 or 16.666 ms.
  • Finally… when your app starts rendering you can't be sure whether it starts on an even or an odd field. So… sometimes the user may need to push a "switch fields" button (if your app provides such a function).

There is another option… render the interleaved frame in an offscreen buffer, apply an RGB -> YUV422 shader, grab the frame and send it to the overlay mixer using DirectShow.
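To illustrate the SwapBuffers timing point above, a rough field-pacing sketch (getTimeMs and renderNextField are hypothetical application hooks, not a real API):

#include <windows.h>   // SwapBuffers (WGL)

double getTimeMs();                  // hypothetical: current time in milliseconds
void   renderNextField(double tMs);  // hypothetical: render the scene animated at time tMs

void runFieldLoop(HDC hdc)
{
    const double fieldIntervalMs = 1000.0 / 50.0;   // 20 ms per PAL field, 1000/60 for NTSC
    double nextDeadline = getTimeMs();
    for (;;) {
        renderNextField(nextDeadline);              // each field gets its own timestamp
        while (getTimeMs() < nextDeadline) {
            // wait (busy-wait here for simplicity) until the field deadline
        }
        SwapBuffers(hdc);                           // present on the field boundary
        nextDeadline += fieldIntervalMs;
    }
}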
I don't use the display card to play the scene on TV. In my app I render the scene into an offscreen buffer (I use a pbuffer with FSAA), use glReadPixels() to read the image out, and push the image to a video output card. So the scanline converter chip in the display card is of no use to me.
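A rough sketch of that readback path (pixel format simplified; sendFrameToVideoCard is a placeholder, not a real SDK call):

#include <GL/gl.h>
#include <cstddef>
#include <vector>

void grabAndPushFrame(int width, int height)
{
    std::vector<unsigned char> pixels(static_cast<std::size_t>(width) * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);                        // rows tightly packed
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, &pixels[0]);
    // glReadPixels returns rows bottom-up; flip here if the output card expects top-down.
    // sendFrameToVideoCard(&pixels[0], width, height);         // placeholder for the card's SDK call
}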

OK… try this…
Render the two half-height images into two off-screen buffers. Then render the final image into a full-size off-screen buffer and use a "masking texture" of size 1x2 pixels (0, 1). Use a shader to combine the two half-images into the big one. The idea is to stretch both fields 2x vertically and use a smart shader to choose which row will be used, depending on whether the Y coordinate in the shader is even or odd.

In pseudocode:

  • render upperfield in fbo0
  • render lowerfield in fbo1
  • bind fbo0 to texture0 (upperfield in shader)
  • bind fbo1 to texture1 (lowerfield in shader)
  • bind mask_texture to texture2 (mask in shader, enable nearest filtering and turn on repeat)
  • activate final-fbo
  • render a full-screen quad using the following fragment shader:
uniform sampler2D upperfield;
uniform sampler2D lowerfield;
uniform sampler2D mask;
/* mask texture is 1x2 and it looks like:
000  or  0
255      1
*/

void main(void)
{
 vec4 uf = texture2D(upperfield, gl_TexCoord[0].xy);
 vec4 lf = texture2D(lowerfield, gl_TexCoord[0].xy);

// in practice you scale these coordinates (or use a second texcoord set) so the
// 1x2 mask repeats once per output row; because of nearest filtering m will
// only ever be 0.0 or 1.0
 float m = texture2D(mask, gl_TexCoord[0].xy).r;

 gl_FragColor = mix(uf, lf, m);
}
  • grab final-fbo

  • you need to play a bit with the texture coordinates, because the shader works with normalized texture coordinates and 2D textures; the 1x2 mask must repeat once per output row. If you use rect textures it is much easier (a setup sketch follows below)

  • you can use gl_FragCoord instead of the mask texture, but it works only on NVidia cards. AFAIK, ATI cards have problems with gl_FragCoord
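A possible setup for that 1x2 mask texture (a sketch only; GL_LUMINANCE and the coordinate scaling are just one way to do it, assuming a full-frame output of frameHeight rows):

#include <GL/gl.h>

GLuint createMaskTexture()
{
    const unsigned char maskPixels[2] = { 0, 255 };   // t=0 row -> upper field, t=1 row -> lower field
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // rows are single bytes
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 1, 2, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, maskPixels);
    return tex;
}

// On the full-screen quad, give the mask a T coordinate running from 0 to
// frameHeight / 2.0 so the 2-texel mask repeats once per output row, while
// the field textures keep their usual 0..1 coordinates.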

My idea is more along the lines of:

  1. render half-height scene to texture A (odd lines)
  2. render half-height scene to texture B (even lines)
  3. render a quad to the frame buffer that merges the A and B textures according to texture I (black and white lines).

EDIT: as yooyo said, in a much more precise way :slight_smile:

Thanks yooyo and ZbuffeR, but I think a final image combined from two half-height images is not good enough for me, because the FSAA quality will be bad. As we know, the FSAA algorithm needs the surrounding pixels; if I render a half-height image, then the row(N) pixel will be combined with the row(N-2) and row(N+2) pixels, not the row(N-1) and row(N+1) pixels, so the final image's AA quality will not be good.

That is the reason why I can't use half-height rendering. Do you think this is right?

Nope… you are wrong. First, the fields have different timestamps. So your render loop should be:

  • update(time)
  • render upperfield
  • update(time + delta) // delta is 1000/50 ms or 1000/60 ms (20 ms for PAL, about 16.7 ms for NTSC)
  • render lowerfield
  • combine the upper and lower fields into the final image
  • grab and send to video-out card

So… AA should not access the vertically neighboring rows, because that leads to flickering on an interlaced output device, but access to N-2 and N+2 is correct, because those pixels belong to the same timestamp.
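A minimal sketch of the loop above (the function and FBO names are placeholders for whatever the engine already has, not a real API):

struct FBO { unsigned int id; };                        // stand-in for the engine's render-target wrapper
extern FBO upperFieldFBO, lowerFieldFBO, finalFBO;

void updateScene(double tMs);                           // hypothetical: animate everything at time tMs
void renderSceneTo(FBO& target);                        // hypothetical: render the current scene state
void combineFields(FBO& upper, FBO& lower, FBO& out);   // the mask shader shown earlier
void grabAndSend(FBO& frame);                           // readback + push to the video-out card

void renderInterlacedFrame(double timeMs, double deltaMs)   // deltaMs: 20.0 (PAL) or ~16.7 (NTSC)
{
    updateScene(timeMs);                                // field 1 at its own timestamp
    renderSceneTo(upperFieldFBO);

    updateScene(timeMs + deltaMs);                      // field 2 one field interval later
    renderSceneTo(lowerFieldFBO);

    combineFields(upperFieldFBO, lowerFieldFBO, finalFBO);
    grabAndSend(finalFBO);
}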

Originally posted by yooyo:
So… AA should not access the vertically neighboring rows, because that leads to flickering on an interlaced output device, but access to N-2 and N+2 is correct, because those pixels belong to the same timestamp.
He is comparing the quality of "render a full-scale image and construct the field using only the odd/even lines" versus "render a half-sized image and use that as the field", and in the latter case there is a quality difference.

Originally posted by Komat:
Originally posted by yooyo:
So… AA should not access the vertically neighboring rows, because that leads to flickering on an interlaced output device, but access to N-2 and N+2 is correct, because those pixels belong to the same timestamp.
He is comparing the quality of "render a full-scale image and construct the field using only the odd/even lines" versus "render a half-sized image and use that as the field", and in the latter case there is a quality difference.
So Komat, do you think what I said is right?

Originally posted by pango:
So Komat, do you think what I said is right?
Partially. It is true that there will be quality degradation in the half-height render; however, the explanation was not entirely correct.

The FSAA does not operate between rows of the final image; it has several vertical samples within each row and operates on those. The degradation comes from the fact that if the half-scaled image is shown back at full size on the output device, each resulting "pixel" is effectively calculated from half as many vertical samples per physical geometry when compared with a pixel from an image that was rendered at full size.

Additionally, there is one thing that operates between consecutive lines of the final image and that will degrade in the half-scaled rendering: the mipmap level selection. Rendering a half-scaled image will likely result in the selection of smaller (in texture dimensions) mipmap levels when compared to the full-scale image.
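For intuition, the (simplified) mipmap level-of-detail selection behind this effect, ignoring anisotropic filtering and LOD clamping, is roughly:

\lambda = \log_2 \rho, \qquad
\rho \approx \max\left( \left\| \frac{\partial (u,v)}{\partial x} \right\|,\;
                        \left\| \frac{\partial (u,v)}{\partial y} \right\| \right)

with u, v measured in texels. Rendering at half height roughly doubles the y-derivatives, so for vertically dominated texture footprints the scale factor doubles and the selected level goes up by about one, i.e. smaller mipmap levels get chosen than in the full-height render.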

@komat:
You are right about mipmaps.

@pango:
I'm afraid you will have to do it the stencil way (render at full resolution and use a stencil mask to select alternate lines for each field)… A sketch follows below.
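The "stencil way" could look roughly like this (a sketch only; it assumes the pixel format was created with a stencil buffer and uses legacy fixed-function GL, as elsewhere in this thread):

#include <GL/gl.h>

// Write 1 into the stencil buffer on every second scanline (starting at
// firstRow = 0 or 1), then leave the stencil test set so later rendering only
// touches those rows. Render one field, flip firstRow, render the other.
void buildScanlineStencil(int width, int height, int firstRow)
{
    glEnable(GL_STENCIL_TEST);
    glClearStencil(0);
    glClear(GL_STENCIL_BUFFER_BIT);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // touch stencil only

    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, width, 0.0, height, -1.0, 1.0);            // draw in window coordinates
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glBegin(GL_LINES);
    for (int y = firstRow; y < height; y += 2) {
        glVertex2f(0.0f,         y + 0.5f);                 // +0.5 hits the pixel centers of row y
        glVertex2f((float)width, y + 0.5f);
    }
    glEnd();

    glPopMatrix();
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 0xFF);                       // only the masked rows pass from now on
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}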

Hi yooyo:
I did a test that renders half-height field images and then combines them into a frame image, but the result is not good: the zigzag is obvious. Maybe the fault is that I misunderstood what you said, so I post my pseudocode:
// render odd field

  • setup odd field’s camera projection p1;
  • render scene;
  • copy image to tex1;

// render even field

  • setup even field’s camera projection p2;

  • render scene;

  • copy image to tex2;

  • use a pixel shader to combine tex1 & tex2 into frame image;

Because my program simulates a real-world camera, "p1" may not equal "p2" when the camera is zooming or panning, but "p1" can also equal "p2" when the camera is not in action. I found that when "p1" equals "p2", the resulting image's zigzag is especially obvious; when the camera is in action, because the whole scene is moving, the zigzag is not as noticeable, but it still exists.

You should activate vsync.

Hum, what about a screenshot of the still shot?
If you skip the pixel shader part and display tex1 then tex2 quickly, do you see a big change? If not, then the problem is in the pixel shader. Post its code, and how you configure it.

@pango:
Post screenshot please.

Interlaced rendering is used to tweak the framerate by cutting down the vertical resolution. So… in the case of the PAL standard you have 25 interleaved frames per second, i.e. 50 fields per second. All fields have different timestamps. The time offset between fields is 20 ms. This means the first field should have the scene rendered at time 0 ms, the second field at time 20 ms, and so on (40, 60, 80, …). Every two fields are interleaved and the "composed" image is sent to the video-out card.

When I say "scene rendered at N ms" this means everything should be animated, even the viewer camera.

Unlike PAL (a 1000/25 ms frame interval, so 20 ms between fields), the NTSC frame interval is 1000/29.97 ms, so the offset between fields is about 16.7 ms.

The zig-zag effect might be related to wrong field order. Examine your output video device and check whether it works in Top Field First or Bottom Field First mode (TFF or BFF).

Interlaced motion video cameras do not produce a frame image. They produce a sequence of even and odd fields. There is no reason to render the combined fields into a frame image. Load each field separately into the output device or combine them in the CPU.

If rendering is both synchronous to the display device and aware of which field is next to be displayed (even or odd) then only one field (even or odd) needs to be rendered with each time step. Apparently your display system does not inform you which field is displayed so you have to compute both.

The even- and odd-field camera projections p1 and p2 should not be equal, even for a static scene. If they were, the even and odd lines would be identical. The odd lines (rows) should be offset vertically by half a line (row) from the even ones. This should be done with a skew transform (see the projection sketch below).
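A minimal sketch of that per-field offset, written here as a post-projection translation of half a line (which field gets the offset, and its sign, depends on the output device's field order; the function name and parameters are illustrative):

#include <GL/gl.h>
#include <GL/glu.h>

void setFieldProjection(bool offsetThisField, int frameHeight,
                        double fovyDeg, double aspect, double zNear, double zFar)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    if (offsetThisField) {
        // Half a scanline is 0.5 px in window space; in normalized device
        // coordinates that is 0.5 * (2.0 / frameHeight) = 1.0 / frameHeight.
        glTranslated(0.0, 1.0 / frameHeight, 0.0);
    }
    gluPerspective(fovyDeg, aspect, zNear, zFar);
    glMatrixMode(GL_MODELVIEW);
}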

Originally posted by yooyo:
@pango:
Post screenshot please.

Interlaced rendering is used to tweak the framerate by cutting down the vertical resolution. So… in the case of the PAL standard you have 25 interleaved frames per second, i.e. 50 fields per second. All fields have different timestamps. The time offset between fields is 20 ms. This means the first field should have the scene rendered at time 0 ms, the second field at time 20 ms, and so on (40, 60, 80, …). Every two fields are interleaved and the "composed" image is sent to the video-out card.

When I say "scene rendered at N ms" this means everything should be animated, even the viewer camera.

Unlike PAL (a 1000/25 ms frame interval, so 20 ms between fields), the NTSC frame interval is 1000/29.97 ms, so the offset between fields is about 16.7 ms.

The zig-zag effect might be related to wrong field order. Examine your output video device and check whether it works in Top Field First or Bottom Field First mode (TFF or BFF).
Thanks yooyo, but my program's camera is not animated at every field time. My program simulates a studio camera; it simulates the camera's pan & zoom, but the camera is not always panning or zooming, so if the pan or zoom angle has not changed from the previous time, the odd and even fields' camera params should be the same, and the rendered image should be the same as frame rendering.

The field order in my app is right, and I also tested a different field rendering order (render odd in the top half and even in the bottom half, and render even in the top half and odd in the bottom half).

How do I post my screenshot in this forum?

Originally posted by pango:
Because my program simulates a real-world camera, "p1" may not equal "p2" when the camera is zooming or panning, but "p1" can also equal "p2" when the camera is not in action. I found that when "p1" equals "p2", the resulting image's zigzag is especially obvious; when the camera is in action, because the whole scene is moving, the zigzag is not as noticeable, but it still exists.
You will have to define more clearly what you call “zigzag”.
To insert an image you will have to upload it somewhere on the web and use a link inside your message. Like this:
:smiley:
If p1 and p2 are identical and there is no movement in your scene, your interleaved frame will only have half-height resolution because the two fields are identical, but no zigzagging, as I understand the word, would be visible.
But even if the camera is not moving, zooming, panning, etc., p1 and p2 are never the same. Think of fields as full frames with alternating lines black or not displayed. The camera needs to be offset for one of the fields.