Run Length Encoding implementation on GPU?

I want to use RLE to compress an image on the GPU. Does anyone know how to implement a GPU RLE algorithm using GLSL? Any references? Thanks!

I want to use RLE to compress an image on the GPU.

Why? GPUs only excel at highly parallel workloads, and RLE is not a very parallelizable algorithm. You will likely gain nothing by moving it to the GPU.

There is a short paper on parallel run-length encoding using CUDA. The paper suggests using a parallel reduction for the summing step. But if I read the reported numbers correctly, the speedup is only about 4x on the GPU for variable-length encoding using shared memory; using global memory, performance is actually worse on the GPU.
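For reference, the usual way to make RLE data-parallel is to reduce it to three GPU-friendly primitives: an element-wise compare, a prefix sum (scan), and a scatter/compaction. Here is a minimal sketch of that idea in Python/NumPy, where each step stands in for the corresponding CUDA primitive (this is an illustration of the general scan-based approach, not the exact algorithm from the paper):

```python
import numpy as np

def parallel_rle(data):
    """Scan-based RLE sketch: each step maps to a data-parallel primitive."""
    data = np.asarray(data)
    if data.size == 0:
        return np.array([], dtype=data.dtype), np.array([], dtype=np.int64)
    # 1. Head flags: 1 wherever a new run starts (element-wise compare).
    flags = np.ones(len(data), dtype=np.int64)
    flags[1:] = data[1:] != data[:-1]
    # 2. Prefix sum over the flags -- on the GPU this is the parallel
    #    scan/reduction step. scan[i]-1 is element i's output slot, and
    #    the final value is the total number of runs.
    scan = np.cumsum(flags)
    num_runs = int(scan[-1])
    # 3. Stream compaction: gather run-start positions and their values.
    starts = np.flatnonzero(flags)
    assert len(starts) == num_runs
    values = data[starts]
    # 4. Run lengths fall out of adjacent start positions.
    lengths = np.diff(np.append(starts, len(data)))
    return values, lengths
```

For example, `parallel_rle([1, 1, 1, 2, 2, 3])` yields values `[1, 2, 3]` with lengths `[3, 2, 1]`. The readback cost of the compressed result is small, but step 2 is where the paper's modest 4x speedup comes from: the scan needs inter-thread communication, which is why shared-memory placement matters so much.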

Thanks a lot. I just want to accelerate the RLE step and read back a compressed image from the GPU.

Thank you, I will consider the CUDA solution.
