If I had to guess, the hybrid FSAA modes (4XS, 6XS, 8XS) on GeForce cards look like they’re implemented by rendering into higher-rez multisample buffers and downsampling. 4XS looks like two 2xMS pixels stacked vertically, 6XS looks like three 2xMS pixels stacked horizontally, and 8XS looks like two 4XS pixels stacked horizontally. If this is true, then I suppose there isn’t a particularly good way of representing this in OpenGL, because the back and front buffers would be different sizes.
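A quick sketch of those hypothesized layouts, treating each mode as a small supersampling grid of 2xMS pixels. The sample offsets are illustrative guesses, not NVIDIA’s actual sample positions:

```python
# Hypothetical model of the hybrid modes: an ss_x * ss_y supersampling grid,
# where each grid cell is a 2x-multisampled pixel.  Offsets are guesses.
MS2 = [(0.25, 0.25), (0.75, 0.75)]  # assumed diagonal 2xMS pattern

def hybrid_samples(ss_x, ss_y, ms_offsets):
    """Sample positions, in final-pixel coordinates [0,1)x[0,1), for a
    hybrid mode built from ss_x * ss_y multisampled sub-pixels."""
    samples = []
    for j in range(ss_y):
        for i in range(ss_x):
            for (mx, my) in ms_offsets:
                samples.append(((i + mx) / ss_x, (j + my) / ss_y))
    return samples

four_xs = hybrid_samples(1, 2, MS2)   # two 2xMS pixels stacked vertically
six_xs = hybrid_samples(3, 1, MS2)    # three 2xMS pixels side by side
eight_xs = hybrid_samples(2, 2, MS2)  # two 4XS pixels side by side
```

The sample counts come out to 4, 6, and 8 per final pixel, matching the mode names.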

I suppose you could render into a pbuffer and use register combiners and a full-screen quad to filter down from super-rez to normal-rez. Alternatively, you could even use glCopyPixels and ARB_imaging.
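Whichever path you take, the filter-down pass is just a box filter over each block of super-rez texels. A sketch of the math the full-screen quad (or convolution) would compute, not any actual combiner setup:

```python
def downsample_box(src, sx, sy):
    """Box-filter a super-rez image (list of rows of grayscale floats) down
    by sx horizontally and sy vertically: each output pixel is the average
    of an sx*sy block of source texels."""
    h, w = len(src), len(src[0])
    out = []
    for y in range(0, h, sy):
        row = []
        for x in range(0, w, sx):
            total = sum(src[y + j][x + i] for j in range(sy) for i in range(sx))
            row.append(total / (sx * sy))
        out.append(row)
    return out
```

For example, a 2x2 checkerboard of 0s and 1s averages down to a single 0.5 pixel.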

I guess the question boils down to: how do you efficiently emulate Hybrid FSAA in OpenGL?



Technically you would do this by rendering your scene multiple times from slightly different positions and using the accumulation buffer to combine them into a single, antialiased image. That’s slow, because the accumulation buffer is typically not supported in hardware.
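The math behind that approach: each jittered pass point-samples the scene at a slightly shifted position, and averaging the passes approximates per-pixel coverage. A toy sketch with a vertical edge and a made-up 4-sample jitter table (not a vendor pattern):

```python
# Illustrative 4-sample jitter offsets, in fractions of a pixel.
JITTER4 = [(-0.375, -0.125), (0.125, -0.375), (0.375, 0.125), (-0.125, 0.375)]

def jittered_coverage(edge_x, jitters):
    """Approximate a pixel's coverage by a surface ending at x = edge_x
    (pixel coordinates in [0,1]) by point-sampling the shifted pixel
    center for each jitter offset and averaging the hits -- the same
    combine the accumulation-buffer loop performs with glAccum."""
    hits = sum(1 for (jx, _) in jitters if 0.5 + jx < edge_x)
    return hits / len(jitters)
```

An edge through the pixel center gives 0.5 coverage with this table, as you’d hope.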

If you’re going to draw a larger image into a pbuffer, you might want to render in tiles to avoid huge pbuffers. That way you don’t even necessarily need a pbuffer, since each tile can be rendered in the normal back buffer and read back.
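Tiling is mostly a matter of splitting the view frustum. A hypothetical helper that slices a frustum into a grid; the returned values would go straight to glFrustum along with the original near/far planes:

```python
def tile_frustum(left, right, bottom, top, cols, rows, i, j):
    """Sub-frustum (left, right, bottom, top) for tile (i, j) of a
    cols x rows grid, so each tile can be rendered at full sample density
    into a small buffer and the pieces reassembled afterwards."""
    w = (right - left) / cols
    h = (top - bottom) / rows
    return (left + i * w, left + (i + 1) * w,
            bottom + j * h, bottom + (j + 1) * h)
```

Each tile then renders with its own projection but the same modelview, and the tiles butt together seamlessly.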

Rendering at a larger size and downsampling is in general quite an uneconomical antialiasing method, because the sample points are poorly arranged. If you’re doing it yourself, you might want to render the big image slightly rotated and then rotate it back when downsampling. That should give you somewhat better sampling.
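To see why the rotation helps: rotating an ordered grid gives every sample a distinct x and a distinct y coordinate, which is exactly what near-vertical and near-horizontal edges care about. A sketch (the angle below is a common rotated-grid choice, not anything vendor-specific):

```python
import math

def rotated_grid(n, angle):
    """n x n ordered-grid sample points inside a unit pixel, rotated by
    `angle` about the pixel center.  With a suitable angle, no two samples
    share an x or y coordinate, improving edge antialiasing."""
    pts = []
    c, s = math.cos(angle), math.sin(angle)
    for j in range(n):
        for i in range(n):
            x = (i + 0.5) / n - 0.5
            y = (j + 0.5) / n - 0.5
            pts.append((x * c - y * s + 0.5, x * s + y * c + 0.5))
    return pts
```

With angle 0 a 2x2 grid has only two distinct x positions; rotated by atan(1/2) it has four.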


For supersampling, there is no way to get around generating additional fragments, although performing the downsampling through blending or accumulation (the accumulation buffer is hardware-accelerated on ATI, but not on NVIDIA, I think) is not a great idea, since you need to send the scene twice.
Unless, of course, you wanted some kind of jittered motion-blur or FOV effect anyway.