MSAA with Deferred shading.

Hi all. I am considering a multi-light system implementation. Currently my points of interest are deferred lighting and light pre-pass (outlined in the ShaderX 7 book). Because the work is done in a user-defined FBO, the automatic MSAA of the default framebuffer doesn’t work. I also learned from the " OpenGL 4.0 Shading Language Cookbook " that doing multisampling on textures in DR won’t work. For my app MSAA is of the highest priority. Does anyone know if the light pre-pass technique allows texture multisampling? Or should I in such a case abandon these approaches altogether and use regular forward rendering? Would I in that case be limited in the number of light sources? I am using GL 3.3.

That’s not true. Since OpenGL 3.x you can use multisampled textures to implement a deferred renderer that uses MSAA. The only thing you have to be careful with is that you want to perform the multisample resolve only after lighting has been applied, otherwise artifacts may appear. The bigger problem with combining deferred rendering and MSAA is that it requires a huge amount of memory and bandwidth, as both techniques are heavy users of these resources.

Is MSAA itself really the priority, or is anti-aliasing in general what matters? There are techniques other than MSAA for doing anti-aliasing. In fact, most games using a deferred renderer tend to use one of the post-processing based anti-aliasing techniques like MLAA.

Well, more or less: light pre-pass has similar issues with MSAA, but again, it’s possible if the memory and bandwidth requirements are not prohibitive.

If MSAA + deferred is too memory/bandwidth intensive for your use case, and alternative anti-aliasing methods like MLAA are not good enough for you, then you probably should use a forward renderer. There is no hard limit on the number of light sources there either; each additional light is simply more expensive than it would be with deferred rendering.
However, be warned that a forward renderer is always more difficult to make efficient, and the number of lights will have a heavy impact on the performance of your renderer. Not to mention that you are less likely to be able to batch rendering commands together, so you are more likely to end up CPU bound.

post-processing based anti-aliasing

Pedantic note: by definition, you can’t “post-process” anti-aliasing. Those are just smart filtering and blurring techniques, not real anti-aliasing.

To do MSAA with deferred shading, you just need texelFetch(). Current/previous games go for MLAA because they want to run in DX9, where you don’t have access to samples.
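In GLSL terms, fetching individual samples from a multisampled texture looks like the sketch below (the sampler names and the shading math are just illustrative placeholders, not a complete lighting pass):

```glsl
#version 330 core

// Hypothetical multisampled G-buffer attachments.
uniform sampler2DMS gAlbedo;
uniform sampler2DMS gNormal;

out vec4 fragColor;

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    int sampleIndex = 0; // pick one specific sample; no filtering happens

    // texelFetch on a sampler2DMS takes an integer sample index
    // instead of a LOD, giving direct access to each MSAA sample.
    vec4 albedo = texelFetch(gAlbedo, coord, sampleIndex);
    vec3 normal = texelFetch(gNormal, coord, sampleIndex).xyz;

    // Placeholder "shading" just to show the fetched data being used.
    fragColor = vec4(albedo.rgb * max(normal.z, 0.0), 1.0);
}
```

This is exactly what DX9-level hardware can’t expose, hence the popularity of MLAA there.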

Light pre-pass (as opposed to regular fat G-buffer deferred) with AA is problematic if you actually want AA quality. When you sample from the light-buffer, on edges you’ll be getting light information from samples that do not belong to the fragment you’re currently applying texturing etc. to. If your light-buffer is single-sampled, then with 4x MSAA some fragments will get 75% of their light from the wrong place.

One way to fix this somewhat (but not completely) is to expand the AA buffer into a bigger single-sampled texture and do non-centroid linear interpolation when sampling; it increases quality, but not perfectly. Plus, it requires knowing the sample positions and adjusting your expansion to match them, which is tricky.
The true way to fix it is to do the shading into another AA texture attachment in the same FBO, so that it uses the same multisampled depth buffer. When shading, get the coverage mask and pick one of the active samples, then texelFetch() once at that sample from the AA light-buffer. Later, resolve this result, applying tonemapping (and definitely a clamp to 0…1) before averaging the multisample results.
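A sketch of that kind of custom resolve pass, tonemapping and clamping each sample before averaging rather than after (the buffer name and the tonemap operator are placeholders; substitute whatever your pipeline uses):

```glsl
#version 330 core

uniform sampler2DMS shadedBuffer; // the AA texture we shaded into
uniform int numSamples;           // e.g. 4

out vec4 fragColor;

// Placeholder tonemap operator (simple Reinhard), for illustration only.
vec3 tonemap(vec3 hdr)
{
    return hdr / (hdr + vec3(1.0));
}

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec3 sum = vec3(0.0);

    for (int i = 0; i < numSamples; ++i)
    {
        vec3 hdr = texelFetch(shadedBuffer, coord, i).rgb;
        // Tonemap and clamp BEFORE averaging, as suggested above,
        // so a single very bright sample can't dominate the pixel.
        sum += clamp(tonemap(hdr), 0.0, 1.0);
    }

    fragColor = vec4(sum / float(numSamples), 1.0);
}
```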

I understand your reasoning, but anti-aliasing means removing jagged edges, thus while MLAA and similar techniques may not be considered true traditional anti-aliasing techniques, they do remove jagged edges.

That’s not exactly true. You can use per-sample shading in the light pre-pass step and then per-sample fetches in the material step (actually, you can use per-sample shading here too, though it would be overkill). This way it should work as expected, I think.
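For reference, per-sample shading in the light pre-pass step can be triggered like this (requires GL 4.0 or ARB_sample_shading, as noted below; the sampler name and the lighting math are made up):

```glsl
#version 400 core
// Or on 3.3 hardware that exposes the extension:
// #extension GL_ARB_sample_shading : require

uniform sampler2DMS gNormalDepth;

out vec4 lightAccum;

void main()
{
    // Merely reading gl_SampleID forces the fragment shader to run
    // once per covered sample, so each light-buffer sample is lit
    // with its own G-buffer data instead of a shared per-pixel value.
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec4 normalDepth = texelFetch(gNormalDepth, coord, gl_SampleID);

    // ... compute this light's contribution from normalDepth here ...
    lightAccum = vec4(normalDepth.xyz * 0.5 + 0.5, 1.0);
}
```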

but anti-aliasing means removing jagged edges

No it doesn’t. That’s the conventional wisdom, but conventional wisdom usually isn’t.

Anti-aliasing, just like aliasing, is a signal processing term. The term “aliasing” refers to an error in analog-to-digital signal conversion, wherein the resulting digital signal seems to represent a different analog signal. It “aliases” itself as another signal. It’s an inherent issue with any analog-to-digital conversion system, whether it’s converting analog sound waves into a digital form, or scan-converting analog triangles into a 2D image.

Graphics programmers don’t get to co-opt language and pretend it means something else. Especially when they didn’t invent said language (aliasing predates CG).

Anti-aliasing refers to one of a number of techniques that remove aliasing. This, by definition, requires being part of the analog-to-digital signal conversion system (in the case of rendering, that means part of the rasterization system). Or, to put it another way, you cannot perform antialiasing on just a digital signal. You can only “smooth” it or filter it.

Real antialiasing happens at rasterization time. Even if the “resolve” step happens later (merging multiple samples into one), true antialiasing ultimately happens as part of the signal conversion.

That’s not to say that things like ML"AA" can’t be useful. They’re just not antialiasing; they’re just smart filters.

Thanks, Alfonse! I knew you’d be even more pedantic. Okay, then MLAA is smoothing, not anti-aliasing. Congratulations!

Real antialiasing happens at rasterization time. Even if the “resolve” step happens later (merging multiple samples into one), true antialiasing ultimately happens as part of the signal conversion.

And these post-processing “smart filters” are applied at the rasterization stage. You’re probably mixing up the ancient raster graphics techniques we used to implement in software with the new shader-based ones.

No, it’s applied after the construction of the rendered image. It’s a post-processing effect. It may be applied during the rendering of a quad, but as far as the actual math is concerned, it is being applied after the rasterization of the triangles. MLAA doesn’t change the rasterization at all; it’s all just a post-processing filter.

That’s not exactly true. You can use per-sample shading in the light pre-pass step and then per-sample fetches in the material step (actually, you can use per-sample shading here too, though it would be overkill). This way it should work as expected, I think.
And then it’ll be as wasteful as FSAA, apart from the rotated-grid sample positions :) . More bandwidth and computations for exactly the same lighting result. Plus, it requires GL4 hardware instead of the GL 3.3 the OP is targeting.

First things first: how many lights? If a forward shading pipe can easily handle your light count, you don’t have a bunch of small triangles, and you don’t have a bunch of overdraw with really complex shading, then just throw your scene at forward shading. Deferred techniques have some great advantages, but they demand special handling not just of edge AA (which you know) but also of translucency.

Like Ilian, I’ve done MSAA Deferred Shading. I just want to highlight a few things mentioned above that might not be crystal clear and that help with implementing Deferred techniques with MSAA.

- You can allocate MS (multisample) render targets and MSAA rasterize to them with no problems. Definitely, do it!

- For the subsequent steps, you can do MS -> MS rendering by running the frag shader per-sample – works, but can be expensive.
- Another easy option is MS -> SS (SS = single sample) rendering by running the frag shader per-pixel. Have each frag shader thread read each sample from the MS buffer (via texelFetch), perform its operation on it, average the results over all samples, and then write out its downsampled per-pixel result. This saves you write bandwidth and a lot of size on the output buffer, when you can get away with it. It is easier with Deferred Shading and harder with Deferred Lighting.
- If you need to (make sure you do first!), you might be able to improve the efficiency of either of these by running a pass to classify the MS buffer and mark which pixels are “edge” pixels (i.e. which require per-sample shading) and which aren’t (and don’t). Then you can process all samples in each pixel for edge pixels, and just the first sample for non-edge pixels. (Note: there’s a trick to mark edge pixels while rasterizing rather than in a separate pass, but it doesn’t handle intersecting triangles. Note also: you might be able to do something like this in the previous step, without separate passes, by looking at your G-buffer data.)
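The MS -> SS option above, sketched as a fragment shader writing to a single-sampled target (the G-buffer layout and the one-directional-light shading are placeholder assumptions, not a full lighting loop):

```glsl
#version 330 core

uniform sampler2DMS gAlbedo;
uniform sampler2DMS gNormal;
uniform int numSamples;
uniform vec3 lightDir;  // assumed single directional light, for illustration

out vec4 fragColor;     // single-sampled output target

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec3 sum = vec3(0.0);

    // Shade every MSAA sample of this pixel, then average the results
    // and write one downsampled per-pixel value.
    for (int i = 0; i < numSamples; ++i)
    {
        vec3 albedo = texelFetch(gAlbedo, coord, i).rgb;
        vec3 n = normalize(texelFetch(gNormal, coord, i).xyz);
        sum += albedo * max(dot(n, -lightDir), 0.0);
    }

    fragColor = vec4(sum / float(numSamples), 1.0);
}
```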

More advanced techniques mentioned by some graphics gurus take the latter step and ++ it by repacking the samples into adjacent GPU threads for greater GPU efficiency, but this isn’t simple and it’s very likely you’ll find you don’t need to go anywhere near this (or possibly the edge pixel classify optimization either).

If one generates some extra information during rasterization and uses it during smart filtering, wouldn’t that make it legal to call this smart filtering anti-aliasing?