Using convolution filters


I have a texture which I generate on the fly, and now I want to blur that texture.
I found out that convolution filters might be what I need.
As far as I know, convolution filters are implemented through the ARB_imaging extension. However, I couldn't find a spec for it anywhere. One spec mentioned that it was moved into the core, but I still could not find any source explaining how to use convolution filters in OpenGL.

Can someone show me how to use a convolution filter in OpenGL?

Thanks in advance,

If by 'generate on the fly' you mean that you create it on the graphics card, then the convolution function won't work, since it only affects pixels when you upload data to the card.

Uh, I tried glReadPixels and glDrawPixels.
I get a framerate of 20, which is already quite low.
But when I glEnable(GL_CONVOLUTION_2D), I get one frame every 10 seconds! (At a resolution of 640x480.)

I am thinking of doing it on the CPU. However, now I need to know how a convolution filter is applied.
Can anyone tell me how the algorithm works?
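(For reference, the basic algorithm is simple: each output pixel is the weighted sum of the source pixels in a kernel-sized neighbourhood around it, with the weights taken from the kernel. A minimal CPU sketch in C, assuming a single-channel float image and clamp-to-edge border handling; the function names are made up for illustration:)

```c
#include <stddef.h>

/* Read a pixel, clamping coordinates so edges reuse the border pixel. */
static float sample_clamped(const float *img, int w, int h, int x, int y)
{
    if (x < 0) x = 0;
    if (x >= w) x = w - 1;
    if (y < 0) y = 0;
    if (y >= h) y = h - 1;
    return img[y * w + x];
}

/* Convolve src (w*h floats) with an odd-sized ksize*ksize kernel into dst.
   Each dst pixel is the weighted sum of its neighbourhood in src. */
void convolve(const float *src, float *dst, int w, int h,
              const float *kernel, int ksize)
{
    int r = ksize / 2;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            for (int ky = -r; ky <= r; ++ky)
                for (int kx = -r; kx <= r; ++kx)
                    sum += kernel[(ky + r) * ksize + (kx + r)] *
                           sample_clamped(src, w, h, x + kx, y + ky);
            dst[y * w + x] = sum;
        }
}
```

A 3x3 box blur is just this with all nine kernel weights set to 1/9.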


Do you mean convolution filter as in HDR shop, which computes diffuse and specular contributions per pixel from all directions, or just a blur?

If you want blur in hardware, you need to do a sum of renders at successively smaller resolutions.
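(To make the idea concrete, here is a hypothetical CPU illustration of one step: each "render at a smaller resolution" amounts to averaging 2x2 blocks, so every bright pixel covers a larger screen area when the small image is drawn back at full size. The function name and even-size assumption are mine, not from the thread:)

```c
/* Downsample a w*h single-channel image to (w/2)*(h/2) by averaging
   2x2 blocks. w and h are assumed even; dst holds (w/2)*(h/2) floats. */
void downsample2x(const float *src, int w, int h, float *dst)
{
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x)
            dst[y * (w / 2) + x] = 0.25f *
                (src[(2 * y)     * w + 2 * x]     +
                 src[(2 * y)     * w + 2 * x + 1] +
                 src[(2 * y + 1) * w + 2 * x]     +
                 src[(2 * y + 1) * w + 2 * x + 1]);
}
```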

It would be nice to have proper blurs as a hardware feature sometime in the future, but I doubt you'll get the equivalent of what HDR shop does in realtime anytime soon.

I don't know what HDR shop is. I should just describe what I want to achieve. Maybe there is another solution:

My engine supports "glowing" objects. Every surface can have a glow texture. A white texel means that the surface will glow at that position.
Now I would like to have a slight flare effect around every glowing object. Billboarded flares cannot be used for this.

The first way I tried to achieve the effect was to render all glowing objects into a texture, create mipmaps for that texture, and then add that texture to the final image. By using GL_TEXTURE_LOD_BIAS_EXT to select a smaller mipmap level, I could get a blur.
However, that blur was not satisfying.

Therefore I wanted to use a convolution filter, because it should yield much better quality. However, OpenGL's convolution filter is damned slow (on a Gf 4200), so maybe doing it on the CPU might be faster.

Or do you know of another way to achieve this flare effect?

BTW: I got the idea to use a conv. filter from this page:…ained.htm#Flare


Like this?

Bottom one.

It looks like convolution on NVIDIA boards is implemented in software, and as a result performance is very poor.
I don't know about ATI, but it looks like the 3Dlabs Wildcat (not the P10) and SGI workstations (O2, Octane) have a hardware implementation of the convolution filter.

The fastest way is a multipass technique on the graphics card…

I think NVIDIA has that technique described on their page.

On a GF4 you can sample 4 points in one pass, so 2 or 4 passes will give you a nice blur, and it's probably faster than you think.

@Mazy: Could you explain how I can blur an image with a multipass technique?

Do you mean I should bind my original image to all 4 texture units, shift the texcoords a bit, and add/multiply/whatever the 4 images?
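(In CPU terms, one pass of that technique would look roughly like the sketch below: the four "texture units" become four shifted reads of the same image, averaged into the destination. Repeating the pass, ping-ponging src and dst, widens the blur. The diagonal one-texel offset pattern and edge clamping are my assumptions:)

```c
/* One 4-tap blur pass: average the pixel with its right, lower and
   lower-right neighbours, clamping reads at the image border. */
static float tap(const float *img, int w, int h, int x, int y)
{
    if (x >= w) x = w - 1;
    if (y >= h) y = h - 1;
    return img[y * w + x];
}

void blur_pass(const float *src, float *dst, int w, int h)
{
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            dst[y * w + x] = 0.25f * (tap(src, w, h, x,     y)     +
                                      tap(src, w, h, x + 1, y)     +
                                      tap(src, w, h, x,     y + 1) +
                                      tap(src, w, h, x + 1, y + 1));
}
```

On the card, the offsets would be applied to the texture coordinates of the four units (typically half a texel in each direction) and the combine stage would do the averaging.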


With ARB/NV_fragment_program it is possible to implement this in one pass. A 3x3 kernel can run at up to 40 MPix/s on an NV35.

Jan2000: yes, that's what I mean.

@ayaromenok: Unfortunately I'm working on a GeForce 4, so no fragment programs to play with.

I'll try Mazy's suggestion.