Bitwise AND in GLSL ES?

Hello.

I have two arrays of ‘unsigned char’ data, and I’d like to do bitwise AND operations between them.

If I have arrays A and B, what I need to do is something like this:
C[i] = A[i] & B[i];

Because the arrays are very large and the platform I’m working on is a smartphone, I am trying to do the job in GLSL ES.

Here, I encountered two problems.

  1. The texture values retrieved from texture2D() are given as floats instead of the ‘unsigned char’ type I want to use. Is there a way to use the unsigned char data as-is in a GLSL ES shader?

  2. As far as I’m aware, there is no bitwise AND operation in GLSL ES. How can I do the AND? Should I implement a function that mimics AND manually?

Unfortunately, there is no support for integer textures or integer operations in GLSL ES.

While you could scale the [0,1] values returned by a texture fetch back to the 0…255 range, implementing an operation that mimics the AND operator without any native integer support would be overkill and a bad idea from a performance point of view.
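Just to illustrate why: emulating an 8-bit AND per texel would need something along the lines of this fragment shader sketch (the uTexA/uTexB/vTexCoord names are made up, and it assumes both byte arrays are uploaded as single-channel luminance textures):

    precision mediump float;

    uniform sampler2D uTexA;   // byte array A as a luminance texture
    uniform sampler2D uTexB;   // byte array B as a luminance texture
    varying vec2 vTexCoord;

    void main() {
        // texture2D returns values in [0,1]; scale back to 0..255
        float a = floor(texture2D(uTexA, vTexCoord).r * 255.0 + 0.5);
        float b = floor(texture2D(uTexB, vTexCoord).r * 255.0 + 0.5);

        float result = 0.0;
        float bitValue = 1.0;
        // peel off the 8 bits one by one; a result bit is set only
        // when both inputs have it set (the product of two 0/1 bits)
        for (int i = 0; i < 8; ++i) {
            result += bitValue * mod(a, 2.0) * mod(b, 2.0);
            a = floor(a / 2.0);
            b = floor(b / 2.0);
            bitValue *= 2.0;
        }

        // scale the ANDed byte back to [0,1] for output
        gl_FragColor = vec4(result / 255.0);
    }

That is eight loop iterations full of floor() and mod() calls for every single texel, which is exactly what I mean by overkill.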

So I would keep the calculation on the CPU side in your case.

You could implement bitwise AND using glLogicOp(GL_AND) and rendering two quads.
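Roughly like this with ES 1.x (just a sketch; drawTexturedQuad() and the texture/result variables stand in for whatever quad-drawing and readback code you already have):

    /* OpenGL ES 1.x sketch: AND two images together in the framebuffer */
    glEnable(GL_COLOR_LOGIC_OP);

    glLogicOp(GL_COPY);          /* first quad just copies image A in */
    drawTexturedQuad(textureA);

    glLogicOp(GL_AND);           /* second quad is ANDed with image A */
    drawTexturedQuad(textureB);

    glDisable(GL_COLOR_LOGIC_OP);

    /* read the ANDed result back if it is needed on the CPU */
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, result);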

However, while this works in ES1, the ES2 API has inexplicably removed glLogicOp without providing any replacement.

I think the reason it was dropped from ES2 is that no embedded GPU supports logic operations: they lack integer manipulation not just in their shader cores but also in their blending hardware.

The only reason I can think of for it being in ES1 is that, at the time, most embedded devices did everything in software anyway, so keeping such a trivial function in ES was okay.

To be honest, I don’t know how well logic ops were supported on earlier desktop hardware, but my guess is that they have been fully hardware accelerated only since OpenGL 3 capable hardware, which came with integer support.

Maybe you should try arekkusu’s idea of using glLogicOp(GL_AND) with ES1 and share your experience with us. I would be interested to know whether it is really hardware accelerated or just emulated on the CPU.

Thanks for your replies, all.

So, in summary:

Basically, I cannot do a native bitwise AND in GLSL ES, and a manual implementation is unlikely to perform well.

One possible option is glLogicOp, but it is only available in GLES 1.x, while GLSL ES requires GLES 2.0.
