Edge Detection

Hi,
While building a deferred shader and trying to address some of its flaws, I have come across the idea of using an edge-based blur filter to compensate for the lack of AA. The idea is particularly appealing because I could also use it for a motion-based blur effect. My current plan is this:

  1. Lighting and shadows are applied.
  2. An edge-detection pass sets an MRT value to 1.
  3. The blur pass sets the MRT value to the blur amount / blur radius.
  4. Values are blurred based on the blur factor; this covers both edge detection and motion blur, and possibly a DoF-based factor.
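To make the plan above concrete, here is a minimal sketch (in Python, just to show the logic) of how the per-pixel factors from steps 2–4 might be folded into a single blur factor. Taking the maximum is my own assumption for how to combine them, not an established rule:

```python
def combined_blur_factor(edge, motion, dof):
    """Combine per-pixel blur contributions into one factor in [0, 1].

    edge:   1.0 where the edge-detection pass flagged a discontinuity, else 0.0
    motion: blur amount from the velocity buffer, already scaled to [0, 1]
    dof:    circle-of-confusion factor from depth, already scaled to [0, 1]

    Taking the max lets any one effect drive the blur radius without
    the contributions stacking past 1.0.
    """
    return max(edge, motion, dof)
```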

My questions are: first, is this a valid way to use the blur, and second, can anyone point me to a known, fast, working method of doing the blur passes?

Thanks

OK, I have a Laplacian filter. The only issue now is that the color of the edges is not grayscale. How would I do this?

One way that sticks out is simply adding the RGB values up and, if the sum is above 0.0, setting the edge value… While this would work, I'm wondering if there is a better way.
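A common alternative to summing the channels equally is to weight them into a luminance value first. A rough sketch of a 3x3 Laplacian run on luminance, assuming Rec. 601 luma weights (the threshold is a made-up tuning knob, not a canonical value):

```python
# Rec. 601 luma weights -- a perceptual grayscale, instead of summing R+G+B.
LUMA = (0.299, 0.587, 0.114)

def luminance(rgb):
    return sum(c * w for c, w in zip(rgb, LUMA))

def laplacian_edge(image, x, y, threshold=0.1):
    """3x3 Laplacian on the luminance of `image` (a 2D list of RGB tuples).

    Returns 1.0 if the response exceeds `threshold`, else 0.0, so the
    result can feed an edge mask directly.
    """
    center = luminance(image[y][x])
    neighbors = (image[y - 1][x], image[y + 1][x],
                 image[y][x - 1], image[y][x + 1])
    response = abs(4.0 * center - sum(luminance(n) for n in neighbors))
    return 1.0 if response > threshold else 0.0
```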

There is an article on exactly this topic in GPU Gems 2, which is now freely available on the NVIDIA developer page:
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter09.html

Thank you for the post, the article is great. It seems to work much better than my simple Laplacian filter. I'm working on implementing it; I'll keep you guys posted :stuck_out_tongue:

Your ideas are reasonable. For a fast blur pass, search for “separable Gaussian blur”. If I remember correctly, there is a chapter in GPU Gems 1 (should be available online, too) that talks about how blur is done in “Tron 2.0”, for bloom etc.
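To illustrate why the separable form is fast: an n-tap horizontal pass followed by an n-tap vertical pass gives the same result as the full n×n 2D Gaussian kernel, at a fraction of the samples. A plain-Python sketch of the idea (the radius and sigma are arbitrary example values, and the borders are simply clamped):

```python
import math

def gaussian_kernel_1d(radius, sigma):
    """Normalized 1D Gaussian taps for offsets -radius..radius."""
    taps = [math.exp(-(i * i) / (2.0 * sigma * sigma))
            for i in range(-radius, radius + 1)]
    total = sum(taps)
    return [t / total for t in taps]

def blur_1d(row, kernel):
    """Convolve one row with a 1D kernel, clamping at the borders."""
    r = len(kernel) // 2
    n = len(row)
    return [sum(kernel[k + r] * row[min(max(i + k, 0), n - 1)]
                for k in range(-r, r + 1))
            for i in range(n)]

def separable_blur(image, radius=2, sigma=1.0):
    """Horizontal pass, then vertical pass on the result: O(r) samples per
    pixel per axis instead of O(r^2) for the equivalent 2D kernel."""
    kernel = gaussian_kernel_1d(radius, sigma)
    horizontal = [blur_1d(row, kernel) for row in image]
    transposed = list(map(list, zip(*horizontal)))
    vertical = [blur_1d(col, kernel) for col in transposed]
    return list(map(list, zip(*vertical)))
```

On the GPU each 1D pass is one fullscreen draw sampling along a single axis, which is where the speedup comes from.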

In GPU Gems 3 there is a chapter about how S.T.A.L.K.E.R. does its rendering; it is very enlightening. They also talk about how they do anti-aliasing, and it is the same thing you want to do: edge detection followed by a blur. I tried that approach myself, but found the results far from what one would desire. Still, try it out yourself; it is worth fiddling with.

Hope that helps.
Jan.

I finally got time to sit down with the code and try to convert it, and I am left wondering what they are doing with the following:

float4 tc5: TEXCOORD5; // Left / Right

float4 tc6: TEXCOORD6; // Top / Bottom

I would think they would be used as the left coordinate (-1, 0), then changed to be the right (1, 0). Yet they seem to do the following: `float4 tc5r = I.tc5.wzyx;`, claiming it gets the opposite coords… How does this work? And what should I pass in for these?
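For what it's worth, my reading of that swizzle (an assumption on my part — I don't have the chapter's vertex shader in front of me) is that the left texcoord is packed into .xy in order and the right texcoord into .zw component-reversed, so the single `.wzyx` swizzle yields the opposite pair in one instruction. Sketched with plain tuples standing in for a float4:

```python
def wzyx(v):
    """HLSL-style .wzyx swizzle on a 4-tuple (x, y, z, w)."""
    x, y, z, w = v
    return (w, z, y, x)

def pack_pair(left, right):
    """Pack two 2D texcoords into one 'float4' so that .wzyx swaps them.

    left goes into .xy in order; right goes into .zw component-reversed
    (z = right.y, w = right.x).  One register then encodes both samples,
    and a single swizzle flips which pair sits 'in front'.
    """
    return (left[0], left[1], right[1], right[0])

# Hypothetical example values: a center texcoord and a one-texel offset
# at an assumed 1024-wide render target.
center = (0.5, 0.5)
offset = 1.0 / 1024.0
left = (center[0] - offset, center[1])
right = (center[0] + offset, center[1])

tc5 = pack_pair(left, right)   # .xy samples the left neighbor
tc5r = wzyx(tc5)               # .xy now samples the right neighbor
```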

For best results you may need to fiddle with the depth scaling factor. The technique is resolution independent, but the scale of the scene indirectly weights the computed depth gradient, which can then throw the final normal/depth selection and subsequent average out of whack…

I had figured I would need to do that, and rather than pass in several texcoords I simply passed in one with the width and height to find offsets. But I am still left wondering what the ‘left / right’ and ‘top / bottom’ ones should be. I am now also wondering why they take the dot product, and how storing them in the xyzw works…
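On the dot product: for unit-length normals, dot(n_center, n_neighbor) is the cosine of the angle between them, so it sits near 1.0 on a flat surface and drops across a crease; packing the four neighbor dots into xyzw just lets all four comparisons run as one vector operation. A toy sketch of the idea (the threshold is my guess at a tuning value, not taken from the chapter):

```python
import math

def dot3(a, b):
    return sum(x * y for x, y in zip(a, b))

def normal_edge(center, neighbors, threshold=0.9):
    """Normal-discontinuity test: dot(center, neighbor) is the cosine of
    the angle between unit normals -- ~1.0 on a flat surface, smaller
    across a crease.  Flag an edge when any neighbor disagrees."""
    dots = [dot3(center, n) for n in neighbors]  # the 'nd.xyzw' packing idea
    return min(dots) < threshold                 # four tests, one comparison

up = (0.0, 1.0, 0.0)
flat = [up, up, up, up]
crease = [up, up, up, (math.sqrt(0.5), math.sqrt(0.5), 0.0)]  # 45-degree fold
```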