Edge removal (pixels) on GPU?

I am interested in speeding up edge removal. The work is done on a single-colour image that is initially 1 bpp (it’s for the printing industry). For easier addressing I convert it to 8 bpp, do the edge removal or addition (thickening), then convert back to 1 bpp. The conversions are fast because I wrote them in SSE2 assembly, but the actual edge computation is slow, requiring 1+8 samples per pixel, and I also run several passes.
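To make the per-pixel test concrete, here is a naive CPU version of the thickening (dilation) pass in Python; names and the 0 = white / 255 = black convention are just illustrative, not my actual code:

```python
def dilate_binary(img, w, h, fg=255):
    """Naive 8-neighbour dilation on an 8 bpp binary image
    (list of rows): a background pixel becomes foreground if
    any of its 8 neighbours is foreground. This is the 1+8
    samples per pixel mentioned above."""
    out = [row[:] for row in img]  # copy so the input stays intact
    for y in range(h):
        for x in range(w):
            if img[y][x] == fg:
                continue  # already foreground, nothing to do
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue  # skip the centre pixel
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and img[ny][nx] == fg:
                        out[y][x] = fg
    return out
```

Erosion is the mirror image (a foreground pixel becomes background if any neighbour is background), and each extra pass grows or shrinks the shapes by one more pixel.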
Could a GPU approach be faster? I am unsure whether a single-colour 8 bpp format (like GL_LUMINANCE) is supported for off-screen rendering, and how efficient the GPU’s multiple pipes are with a linear algorithm.
The images are large, up to 20k by 15k pixels, so I would need to use tiling, probably 2k by 2k, but I wouldn’t mind small errors at the tile borders.
I would also like to know if edge removal can be made ‘directional’, that is, to remove pixels where more of them are available (and to add pixels where there is more free space), because skeletons don’t print well (they are prone to transfer loss), and adding edge pixels may result in overlap.
Any competent advice would be appreciated. Thanks.

what’s the question again?

Could you provide a link to edge removal (pixels) on GPU source code or article?

Sorry, we non-printing-industry-savvy guys don’t know what “edge removal or addition (thickening)” is.
Can you provide screenshots or something similar to make yourself clear?

Here are the links to an original image, edge pixels removed, and edge pixels added:

Edit: changed from TIFF to PNG

Here are the links to an original image, edge pixels removed, and edge pixels added:
Hey, FYI: it’s impolite to link people to oddball file formats like “.tif”. Instead, use standard file formats: JPEG, PNG, or even GIF. TIFF is not acceptable; trying to view it invoked the QuickTime plugin, which immediately crashed Firefox.

In any case, glslang can do what you’re looking for. But since it’s going to be a round-trip (upload image, do operation, download image), and it’ll probably require repeated render-to-texture calls to make it work, I doubt you’ll see any significant performance gain over a multithreaded CPU solution.

These operations are called minimum/maximum in Photoshop, and dilate/erode in the GIMP.

For 1-bit images, I would tackle this as a blur + threshold (toward white or toward black); pretty easy and fast to do in GLSL.
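Roughly the idea, sketched on the CPU in Python so it’s easy to check (a fragment shader would do the same per-pixel work; the function name and thresholds are just for illustration): with a binary 0/255 input, a low threshold after the blur grows the 255 regions, and a high threshold shrinks them.

```python
def blur_threshold(img, w, h, thresh):
    """3x3 box blur followed by a threshold on an 8 bpp binary
    image (list of rows). With 0/255 input, a low threshold
    dilates the 255 regions and a high threshold erodes them."""
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total = count = 0
            # Average over the in-bounds part of the 3x3 window.
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = 255 if total / count >= thresh else 0
    return out
```

The blur is separable, so on the GPU it can be split into a horizontal and a vertical pass before the threshold.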

EDIT: what korval said. But TIFF images are not so bad; just configure your browser properly. Firefox 2 on Ubuntu here and it’s fine :)

Sorry about the TIFF issue; it’s widely used in the printing industry and in the software I write.

I know about the minimum/maximum in Photoshop, but the filter used in the samples is a bit different: a pixel is changed only if at least two of its neighbours are the opposite colour, which results in smoother shapes, less chance of overlap, and no amplification of noise. However, the ‘joints’ are left with extra pixels, which doesn’t happen when a single opposite-colour neighbour is enough.
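For clarity, here is the erode direction of that rule as a rough Python sketch (my choice to treat pixels outside the image as background is just an assumption for the example; names are illustrative):

```python
def erode_two_neighbours(img, w, h, fg=255, bg=0):
    """A foreground pixel is removed only if at least two of its
    8 neighbours are background; isolated single-background
    neighbours (i.e. single-pixel noise) leave it untouched.
    Out-of-image pixels are counted as background here."""
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x] != fg:
                continue
            opposite = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or img[ny][nx] == bg:
                        opposite += 1
            if opposite >= 2:
                out[y][x] = bg
    return out
```

The add (thicken) direction is symmetric: flip a background pixel to foreground only when at least two neighbours are foreground.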

The idea of blur + threshold is good, thank you, I’ll try it. I’m not sure it will allow enough control, though (the actual job doesn’t dilate/erode uniformly).

I did not see any example of edge removal on the GPU, but on this site, “http://www.gpgpu.org/developer/”, tutorial 0 “Hello GPU” shows an edge detection filter implemented in Cg. You can easily change the fragment program to use blurring equations instead.

However, the ‘joints’ are left with extra pixels, which doesn’t happen with one neighbour pixel of opposite colour
Yeah, I can see that happening in the image.
I’d just do two filters: for the grow, if a pixel is white, see if any of the surrounding 8 pixels are black, and if so make it black; plus the opposite for the shrink (don’t know if it’ll mess up the joints, though).
It should run very fast on the GPU.