Blurring objects


I am trying to add a blur effect to 3D objects as a 2D image-processing effect, after the objects have already been rendered to a buffer. Since I want to be able to blur individual objects differently, or not at all, I first render a mask of each object to a texture: each object is drawn in a fixed colour derived from its blur setting (0-32, which is the size of the kernel), with no lighting of course. Then, after the ‘normal’ render is finished, blur is applied to pixels with non-zero mask values, using a blur kernel size based on the mask value.
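To make the second step concrete, here is a minimal CPU-side sketch of a mask-driven blur, using plain Python lists as grayscale images and a box filter in place of a Gaussian; all names are illustrative, not part of any API:

```python
def masked_blur(img, mask):
    """Blur each pixel of a grayscale image by the per-pixel kernel
    radius stored in the mask (0 = leave the pixel untouched).
    A box kernel is used for brevity; a Gaussian works the same way."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            r = mask[y][x]
            if r == 0:
                continue
            acc, n = 0.0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += img[yy][xx]
                        n += 1
            out[y][x] = acc / n
    return out
```

On the GPU this would of course be a fragment shader sampling the mask texture, but the per-pixel logic is the same.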

As far as I know, this is similar to how depth-of-field effects work. However, it is far from ideal, because I need to blur not only the actual pixels of the objects but their neighbourhoods as well (that is why a blurred object always looks ‘bigger’ than its original). Somehow I have to render the masks of the objects with a bigger outline, at least as big as half the size of the filter kernel.
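The required enlargement is essentially a morphological dilation of the mask by half the kernel size: every pixel within that radius of an object should inherit the object's blur value. A minimal sketch (plain Python, illustrative names) of what the enlarged mask should contain:

```python
def dilate_mask(mask, radius):
    """Grow the non-zero regions of the mask by `radius` pixels so the
    blur also reaches the neighbourhood outside the object silhouette.
    Each output pixel takes the maximum mask value in its neighbourhood."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            m = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        m = max(m, mask[yy][xx])
            out[y][x] = m
    return out
```

The question below is essentially how to get this dilation (or a good-enough approximation of it) cheaply on the GPU.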

My question is: how can I do that? I tried rendering the masks in a second pass, this time as lines instead of polygons, using the line width setting to make the outline bigger. But the maximum line width is 10, which is not enough for the bigger kernels, and my objects do not look very good at the edges, probably because of this.

Is there any other way?


  1. You can make the objects be drawn bigger than they actually are when drawing to the mask buffer, for example by changing their local scaling before the transformations, or by adjusting it afterwards, in clip space (which gives you better control).

  2. A straightforward solution: blur the mask before using it.


Solution 1 looks a lot more appealing, since it does not need an extra pass. But are you sure I would get the same result? Solution 2 could be done at the same time as the blur itself, except that then I don’t think I could separate the kernel into two 1D passes.
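For reference, separating the kernel works because a 2D Gaussian is the outer product of two 1D Gaussians, so a horizontal pass followed by a vertical pass reproduces the full 2D convolution exactly (also with clamp-to-edge sampling, since each axis is clamped independently). A small sketch, with illustrative names:

```python
import math

def gauss1d(radius, sigma):
    """Normalised 1D Gaussian kernel of half-width `radius`."""
    w = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur_h(img, k):
    """Horizontal 1D convolution with clamp-to-edge sampling."""
    r = len(k) // 2
    h, w = len(img), len(img[0])
    return [[sum(k[r + d] * img[y][min(max(x + d, 0), w - 1)]
                 for d in range(-r, r + 1))
             for x in range(w)] for y in range(h)]

def blur_v(img, k):
    """Vertical pass: transpose, reuse the horizontal pass, transpose back."""
    t = [list(c) for c in zip(*img)]
    return [list(c) for c in zip(*blur_h(t, k))]

def blur2d(img, k):
    """Direct 2D convolution with the outer-product kernel, for comparison."""
    r = len(k) // 2
    h, w = len(img), len(img[0])
    return [[sum(k[r + dy] * k[r + dx]
                 * img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                 for dy in range(-r, r + 1) for dx in range(-r, r + 1))
             for x in range(w)] for y in range(h)]
```

What breaks the separation in the masked case is that the kernel size varies per pixel, so the horizontal pass no longer knows which vertical kernel each sample will end up under.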

I thought about this and now I am not sure if I can use either of your suggestions.

  1. This could only work if I scaled every primitive using its centre as the origin of the transformation, and maybe not even then, I am not sure. Otherwise, how can I avoid my objects moving in some direction when I make them bigger by transforming their vertices? The other problem is the amount of scaling I would need.

  2. This would work, although it would reduce the amount of blur at the edges of the objects, since the blur value on the object itself would be averaged down by blurring the mask. In any case, blurring the mask would add two render passes if the blur is separated.

Option 1 is less universal, so it is not acceptable if you are building a framework for many users.

Option 2 is not so bad. The mask buffer can be of a lower resolution (2× or even 4× smaller), so two additional passes on it would not cost you much.
Don’t forget my suggestion to use a single FBO (when performing the two-pass Gaussian blur) with these two textures attached to colour attachments 0 and 1, switching the render target with the DrawBuffers() command.


I will think about implementing option 2.

Do you think that blitting the mask to a lower-resolution texture and then accessing it with linear filtering could be a better solution than an actual render pass? In theory it is the same as applying a box filter.
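The box-filter equivalence is easy to check on the CPU: a 2× downsample that averages each 2×2 block (which is what a box-filtered blit, or a linear-filtered read at the block centre, produces) is exactly a 2×2 box filter. A tiny sketch with illustrative names:

```python
def downsample2x(img):
    """Halve the resolution by averaging each 2x2 block -- equivalent
    to a 2x2 box filter followed by point sampling at block centres."""
    h, w = len(img), len(img[0])
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1]
              + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w // 2)] for y in range(h // 2)]
```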


Why blit the mask if you can draw it at a lower resolution in the first place?

Yes, using linear filtering for the mask access is the way to go. You are performing blurring anyway, and averaging the kernel sizes doesn’t break anything.

No other solution has come to my mind yet.

I use the mask for another task as well, therefore I cannot render it at lower than quarter size. So either I add another pass to blur the mask (at least 8×8 or even 16×16 texture accesses per pixel), or I blit the mask to a lower-resolution texture, use that one for the blurring, and keep the quarter-size mask as well.

Which method do you think is faster? The blit should be easier on the hardware, but it involves binding a different FBO.

I see a third way (if I am not missing something):
do your first blurring pass to a lower-resolution texture and the second one back to the original texture. This combines the ping-pong and Gaussian-blur properties, and should be faster than the ways suggested so far (still no need for another FBO).

Are we still talking about the mask texture, and not the ‘real’ rendered image?

I would be surprised if binding two FBOs for a blit operation were more expensive than a render-to-texture pass, but I have never run a test to compare the two.

I’m talking about the mask texture. There is no additional render pass in my proposal, and at the same time it needs no blit operation.

As far as I can see, you are now looking for the most efficient way to blur your mask (using a constant kernel, I guess) while keeping its resolution equal to the source image resolution. I suggest doing it with the well-known two-pass Gaussian filter, but with a smaller intermediate texture:

A = source image (size = n)
M = mask (size = n)
B = additional buffer (size = n/4)
C = additional buffer (size = n/?)

  1. draw the kernel sizes to the mask M
  2. M -> B using a horizontal Gaussian kernel of size k
  3. B -> M using a vertical Gaussian kernel of size k/4
  4. A -> C using M, horizontal blur
  5. C -> A (or C -> screen) using M, vertical blur

You can even make B and C the same buffer. All operations can be performed on a single FBO.
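Steps 2-3 of this pipeline (blurring the full-resolution mask through a quarter-size intermediate) can be sketched on the CPU; a box filter stands in for the Gaussian, nearest-neighbour sampling stands in for the GPU's linear filtering, and all names are illustrative:

```python
def box_blur_h(img, r):
    """Horizontal box blur of half-width r, clamp-to-edge sampling."""
    h, w = len(img), len(img[0])
    return [[sum(img[y][min(max(x + d, 0), w - 1)]
                 for d in range(-r, r + 1)) / (2 * r + 1)
             for x in range(w)] for y in range(h)]

def box_blur_v(img, r):
    """Vertical pass via transpose."""
    t = [list(c) for c in zip(*img)]
    return [list(c) for c in zip(*box_blur_h(t, r))]

def downsample(img, f):
    """Point-sample every f-th pixel (a blit to a 1/f-size buffer)."""
    return [[img[y * f][x * f] for x in range(len(img[0]) // f)]
            for y in range(len(img) // f)]

def upsample(img, f, h, w):
    """Nearest-neighbour read-back at full resolution."""
    return [[img[min(y // f, len(img) - 1)][min(x // f, len(img[0]) - 1)]
             for x in range(w)] for y in range(h)]

def blur_mask_lowres(mask, k, f=4):
    """Steps 2-3: horizontal blur while shrinking to 1/f size,
    vertical blur at the reduced size, then back to full resolution."""
    h, w = len(mask), len(mask[0])
    b = downsample(box_blur_h(mask, k), f)   # step 2: M -> B
    m = box_blur_v(b, max(k // f, 1))        # step 3: vertical, kernel k/f
    return upsample(m, f, h, w)
```

Steps 4-5 are the ordinary separated blur of the image A, driven by the blurred mask.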

Maybe there is something about FBOs that I do not know: don’t the textures attached as colour attachments of an FBO have to be the same size?

Can I attach a texture (size=n) to GL_COLOR_ATTACHMENT0 and another (size=n/4) to GL_COLOR_ATTACHMENT1?

You are really using only one attachment at a time when operating on the (size != n) buffers, since you draw to a single attachment and don’t use depth/stencil.
Hence, it would be OK, I guess. Anyway, you can use an additional FBO as a fallback :wink:

Thanks, I will try what you suggest. Although, to make matters even more complicated, I work with interlaced video, so I have to interleave two consecutive frames as odd and even fields, and blurring across the fields would mess everything up.
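For the interlaced case, one option (sketched below with illustrative names) is to split each frame into its two fields, blur each field on its own, and re-interleave, so no vertical kernel ever straddles the field boundary:

```python
def split_fields(frame):
    """Separate an interlaced frame (a list of scanlines) into its
    even field (lines 0, 2, 4, ...) and odd field (lines 1, 3, 5, ...)."""
    return frame[0::2], frame[1::2]

def interleave(even, odd):
    """Recombine two fields into an interlaced frame."""
    out = []
    for e, o in zip(even, odd):
        out.append(e)
        out.append(o)
    return out
```

Any vertical blur would then run on each field separately, between `split_fields` and `interleave`; horizontal blurs are unaffected by interlacing.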