Blending AA + transparency AA - possible on 7800?

I’d like to know if it’s possible to use both types of AA - coverage-blending AND supersampling TAA - in order to increase the quality of my line and quad strip meshes. I know blending and MSAA don’t work together, but years ago I had blending and SSAA working well together on a GeForce2 Ti (with tweaks).
Right now I have a 6600GT, which probably doesn’t support TAA (it may have the capability in hardware, but not in the drivers), but I plan to upgrade to a 7800GT in the coming months.
Thank you.

Originally posted by Tzupy:
I know blending and MSAA don’t work together
Uhm, that should work just fine. What exactly are you talking about? Are you referring to the fact that nVidia supports blending but not MSAA on FP16, or something like that?

I’m talking about using coverage-blending AA and MSAA (4x, enabled from the global driver settings). On line strips it looks worse than either AA alone. And the nVidia docs specify that blending and MSAA don’t work together. But since SSAA does work with blending AA, and the 7800 can use SSAA for the new TAA, I would expect the desired improvement.
Now, if someone knows how to enable SSAA on a 6600GT, please do tell me, and I won’t need a 7800 anymore (except for the upcoming ES4: Oblivion :wink: ).

I’m a bit confused.

So-called “transparency supersampling” AA (which is basically supersampling for alpha-tested geometry) isn’t exposed in OpenGL. It is only supported on GeForce 7800 hardware.

“Transparency multisampling” is just another name for the alpha-to-coverage mode in the ARB multisample extension (and works on all hardware going back to GeForce3):
http://www.nvidia.com/dev_content/nvopenglspecs/GL_ARB_multisample.txt
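
As a rough illustration (just a sketch; it assumes the context was already created with a multisample pixel format), alpha-to-coverage is enabled like this:

    // Alpha-to-coverage via ARB_multisample: the fragment's alpha value is
    // converted into a per-sample coverage mask, so no blending (and no
    // sorting) is needed. Token values are from the ARB_multisample spec,
    // in case the headers lack them.
    #ifndef GL_MULTISAMPLE_ARB
    #define GL_MULTISAMPLE_ARB              0x809D
    #define GL_SAMPLE_ALPHA_TO_COVERAGE_ARB 0x809E
    #endif

    glEnable(GL_MULTISAMPLE_ARB);               // multisample rasterization (on by default)
    glEnable(GL_SAMPLE_ALPHA_TO_COVERAGE_ARB);  // alpha dithers the sample coverage mask
    glDisable(GL_BLEND);                        // order-independent, so blending can stay off
    // ... draw the alpha-tested geometry ...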

Neither of these will help with line antialiasing.

We no longer support supersampling AA in the control panel. If you want this, we recommend you implement it yourself in your application by rendering to a larger buffer and then downsampling with a shader. This has the disadvantage that (unlike multisampling) the shader gets executed for every pixel in the large buffer.
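
If it helps, the idea is roughly the following (only a sketch using EXT_framebuffer_object; renderScene and drawFullscreenQuad stand in for the application’s own code, and the EXT entry points are assumed to be loaded):

    // 2x2 supersampling done by the application: render at twice the window
    // size, then filter back down. No depth attachment is shown; add a depth
    // renderbuffer if the scene needs one. A 3200x2400 texture also needs
    // non-power-of-two texture support (or rounding up to 4096).
    GLuint fbo = 0, colorTex = 0;
    const int winW = 1600, winH = 1200, SS = 2;   // 3200 x 2400 off-screen buffer

    void createSupersampleTarget()
    {
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, winW * SS, winH * SS, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, colorTex, 0);
        // check glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) here
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }

    void drawFrame()
    {
        // 1. Render the scene at 2x resolution into the texture.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glViewport(0, 0, winW * SS, winH * SS);
        glClear(GL_COLOR_BUFFER_BIT);
        renderScene();                            // application scene (placeholder)

        // 2. Downsample: draw a window-sized quad sampling the big texture.
        //    Bilinear filtering acts as a simple 2x2 box filter; a fragment
        //    shader could implement a better reconstruction filter.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glViewport(0, 0, winW, winH);
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        drawFullscreenQuad();                     // textured quad (placeholder)
    }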

MSAA and blending certainly do work together.

Coverage-based antialiasing (i.e. GL_LINE_SMOOTH, GL_POLYGON_SMOOTH) is compatible with MSAA, but has the disadvantage that it requires blending and is therefore order-dependent.
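
For reference, the usual coverage-based setup is something like this (sketch):

    // Smooth (coverage-based) line AA: the coverage fraction ends up in alpha,
    // so blending must be enabled and the geometry drawn in a sensible order.
    glEnable(GL_LINE_SMOOTH);
    glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // ... draw line strips ...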

Does this answer your question?

Thank you for the answer, it is very informative.
I was under the impression that coverage-blending AA and MSAA don’t work together, because the multisample specification says that ‘lines are rasterized using the following algorithm, regardless of whether line antialiasing (LINE_SMOOTH) is enabled or disabled’, and the same for polygons.
Since you say that they do work together, something must be wrong with my code, because for me they don’t seem to. The worst case is 2x MSAA together with coverage-blending AA: there are missing points in the lines and it looks awful. Plain 2x MSAA actually looks better than that. With 8x MSAA and no coverage-blending AA it looks as good as coverage-blending AA alone. I would have expected adding MSAA to improve the quality, not to worsen it. I only enabled MSAA from the control panel and didn’t request a multisampled rendering context, since I suppose that wouldn’t change the result - would it? I tried with and without a depth buffer, and it made no difference. Is there something I should check regarding the multisampling parameters or functions?
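In case it matters, this is roughly all I think I could check on the GL side (just a sketch, with token values taken from the ARB_multisample spec):

    // Query what the control-panel-forced context actually provides.
    #ifndef GL_SAMPLE_BUFFERS_ARB
    #define GL_SAMPLE_BUFFERS_ARB 0x80A8
    #define GL_SAMPLES_ARB        0x80A9
    #define GL_MULTISAMPLE_ARB    0x809D
    #endif

    GLint sampleBuffers = 0, samples = 0;
    glGetIntegerv(GL_SAMPLE_BUFFERS_ARB, &sampleBuffers);
    glGetIntegerv(GL_SAMPLES_ARB, &samples);
    printf("sample buffers: %d, samples per pixel: %d\n", sampleBuffers, samples);

    if (sampleBuffers > 0)
        glEnable(GL_MULTISAMPLE_ARB);   // enabled by default, but harmless to be explicit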
The problem with rendering to a larger buffer and then downsampling is the size of such a buffer. AFAIK the maximum size is 4k x 4k on nVidia, but only 2k x 2k on Ati, and to achieve 4x supersampling I would need at least 3,200 x 2,400 for my preferred screen resolution of 1,600 x 1,200. I currently recommend using nVidia cards, but I’d rather not rule out Ati. I know I could render to tiles of the maximum supported texture size, but even with the clamp-to-edge setting the tile edges are still a bit visible - I know because I’m using this method to display very large bitmaps - and for vector graphics the visible edges would be more annoying than for raster graphics.
If I could get 16x MSAA I could drop the coverage-blending AA and get rid of the front-to-back ordering for polygons. But there is no 16x MSAA available in the control panel (81.85 drivers). If I added a second 6600GT, would I get the option of 16x MSAA (I have an Asus A8N-SLI Dlx mobo)?

I browsed the renderbuffer specification, hoping to find maximum sizes over 4k, but I didn’t find explicit maximums, only MAX_RENDERBUFFER_SIZE_EXT (0x84E8). And when I queried it (for my 6600GT) with glGetIntegerv, it returned 1000! :confused:
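In case I’m messing up the query itself, this is basically all I do (sketch):

    // EXT_framebuffer_object limit query (token value 0x84E8, as above).
    #ifndef GL_MAX_RENDERBUFFER_SIZE_EXT
    #define GL_MAX_RENDERBUFFER_SIZE_EXT 0x84E8
    #endif

    GLint maxRenderbufferSize = 0;
    glGetIntegerv(GL_MAX_RENDERBUFFER_SIZE_EXT, &maxRenderbufferSize);   // needs a current GL context
    printf("GL_MAX_RENDERBUFFER_SIZE_EXT = %d\n", maxRenderbufferSize);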

Originally posted by Tzupy:
only 2k x 2k on Ati
The X1K series supports 4k x 4k.

Originally posted by simongreen:
So-called “transparency supersampling” AA (which is basically supersampling for alpha-tested geometry) isn’t exposed in OpenGL. It is only supported on GeForce 7800 hardware.
Adaptive antialiasing, which is essentially the same thing, is supported on ATI cards.

Originally posted by Humus:
Adaptive antialiasing, which is essentially the same thing, is supported on ATI cards.
Just curious…

Which cards support this?

Is multisampling separate from this?

I’m guessing since you said its ‘adaptive’ you’re just doing something like stochastically sampling the pixel and accumulating the shading results until the change in the accumulated value drops below some epsilon? Is there any way to set the epsilon?

@Humus:
Not related to the title, but do the Radeon X1K series support smooth (coverage-blending) polygon AA? Tests I ran 3 years ago on a 9700 showed it didn’t support smooth AA.

Originally posted by Stephen_H:
Which cards support this?

Is multisampling separate from this?

I’m guessing since you said its ‘adaptive’ you’re just doing something like stochastically sampling the pixel and accumulating the shading results until the change in the accumulated value drops below some epsilon? Is there any way to set the epsilon?
It’s supported on the X1K series. You can enable it on older cards though with a registry hack.

Basically, it’s multisampling as usual, but when alpha test is enabled it’s supersampled. It’s all handled by the driver, and it has to be enabled in the control panel. It’s not app controllable at this point.

Originally posted by Tzupy:
@Humus:
Not related to the title, but do the Radeon X1K series support smooth (coverage-blending) polygon AA? Tests I ran 3 years ago on a 9700 showed it didn’t support smooth AA.

It is my understanding that this is supported on R300 and up, but actually I don’t know. I have never tried using any of those features myself as I find multisampling solves all my AA needs.