Hi! I’m currently implementing a deferred shading scheme and have to tackle the lack of hardware anti-aliasing. To perform an intelligent blur on the final image, would there be much difference between using depth and normal for edge detection as compared to simply running an edge detection filter on the color values? Would there be much difference in what edges are/aren’t detected? I would be happy if someone who’s already done this could share his/her experiences.
I have only implemented edge detection using depth/normals. I haven’t done anti-aliasing with it (although I know that it can be done).
I would say using depth/normals should yield better results, since there are many edges you can’t detect when running edge detection on the color values alone (e.g. a silhouette between two similarly colored surfaces).
On the other hand, if there is not much difference in color, the missing anti-aliasing might not be very noticeable anyway.
I would say start with the easier approach (color edge detection), implement your anti-aliasing on that basis, and check the results. If you are not satisfied, add another edge-detection shader that uses depth/normals, then compare the results and the performance impact.
Since you are doing deferred shading, all the data is there already, so adding the edge detection should not be much work.
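To make the "easier approach" concrete, here is a minimal CPU-side sketch of color edge detection (plain Python standing in for a fullscreen shader pass; the luminance weights are the usual Rec. 601 ones, and the threshold value is just illustrative):

```python
# Sketch of color-based edge detection: a pixel is flagged as an edge
# when its luminance differs too much from a 4-connected neighbor.

def luminance(rgb):
    # Rec. 601 luma weights; rgb components assumed in [0, 1].
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

def color_edges(image, threshold=0.1):
    """image: list of rows of (r, g, b) tuples. Returns a boolean mask
    marking pixels whose luminance contrast to any 4-connected neighbor
    exceeds `threshold` (an illustrative value -- tune per scene)."""
    h, w = len(image), len(image[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            lum = luminance(image[y][x])
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    if abs(lum - luminance(image[ny][nx])) > threshold:
                        edges[y][x] = True
                        break
    return edges
```

In a real implementation this would be a fragment shader sampling the final color buffer, with the blur applied only where the mask is set.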
If it helps you: Stalker uses depth/normal edge detection for its anti-aliasing, so maybe they found it to work better.
However, to be honest, the depth/normal edge detection I got working didn’t really satisfy me. Especially the depth edge detection was very tricky and had a few problems. And if the edge detection doesn’t work properly, your anti-aliasing won’t be perfect either. But maybe I did something wrong there.
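For reference, a sketch of the per-pixel tests a depth/normal edge detector typically performs (again plain Python instead of shader code; the threshold values are assumptions, not something from the posts above). The depth part shows one common fix for the trickiness mentioned: comparing *relative* rather than absolute depth differences, since an absolute threshold tends to fire on steep surfaces near the camera and miss real discontinuities far away:

```python
def normal_edge(n0, n1, cos_threshold=0.9):
    """Edge if two unit normals diverge too much, i.e. their dot
    product (cosine of the angle between them) drops below threshold."""
    dot = sum(a * b for a, b in zip(n0, n1))
    return dot < cos_threshold

def depth_edge(z0, z1, rel_threshold=0.02):
    """Edge if the relative difference between two linear eye-space
    depths is large. rel_threshold is an illustrative value."""
    return abs(z0 - z1) > rel_threshold * min(z0, z1)
```

A pixel would be blurred when either test fires against any of its neighbors; combining both tests catches creases (normal edges at equal depth) as well as silhouettes (depth edges with similar normals).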
Thanks for your reply. I will try both methods and see which one I end up using.
I implemented edge detection in a cartoon rendering demo some time ago, and used both colors and normals: http://www.delphi3d.net/download/npr_toon.zip
If you want to check depths as well, I think you will want to use linear Z values.
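To expand on that: the values in the depth buffer are non-linear after a perspective projection, so comparing them directly makes thresholds depth-dependent. Assuming a standard OpenGL-style projection with the depth buffer in [0, 1], linearizing works like this (near/far are your camera planes):

```python
def linearize_depth(d, near, far):
    """Convert a [0, 1] depth-buffer value from a standard perspective
    projection back to linear eye-space distance. Derived from the
    usual OpenGL projection matrix: d = 0 maps to the near plane,
    d = 1 to the far plane."""
    return (near * far) / (far - d * (far - near))
```

With the depths linearized, a single relative threshold behaves consistently across the whole depth range.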