I’m trying to implement some GPGPU (General Purpose GPU) calculations in OpenGL ES 2.0 on the iPhone 3GS.
Ideally you would use a floating-point texture as the data array and set things up so that the texel-to-fragment ratio is 1:1, with the calculation performed on the data in the fragment shader. But unless I'm mistaken, floating-point textures are not supported on the iPhone. Or is there a way of storing floats in a texture?
Or is there some other way I could pass the data so that there is exactly one data-array element per fragment?
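To make the 1:1 idea concrete, here is a sketch of the arithmetic I have in mind (an assumption on my part, not code I have run on the device): draw a full-screen quad into a W x H framebuffer whose source texture is also W x H, use GL_NEAREST filtering, and sample at texel centers, which for index i along a dimension of size n sit at (i + 0.5) / n.

```python
def texel_center_coords(n):
    """Texture coordinates that hit each texel's center exactly."""
    return [(i + 0.5) / n for i in range(n)]

def texel_index(coord, n):
    """Texel index a GL_NEAREST fetch would return for this coordinate."""
    return min(int(coord * n), n - 1)

# With these coordinates, fragment i reads exactly texel i, so each
# data element is processed exactly once per pass:
coords = texel_center_coords(8)
assert [texel_index(c, 8) for c in coords] == list(range(8))
```

The half-texel offset is the important part; sampling at i / n instead would put the coordinate on a texel boundary, where the fetched texel is no longer guaranteed to be the one intended.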
I have a couple of ideas that I might try, and I would be very grateful for any comments on whether they seem doable, and if so, how.
One idea I have: is there any way of accessing the values of a GL_DEPTH_COMPONENT renderbuffer from the shader? And is it possible to render to a GL_DEPTH_COMPONENT?
The other idea is to make a texture with GL_LUMINANCE format. As I understand it, if you store a value L in a texel, it is then fetched as color = (L, L, L, 1). My idea would be to use color.x in my calculation, but I would need a conversion function to map it back to the value I want. Would this be possible? How many bits would I have in each texel?
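The conversion function I am imagining would look something like this (assuming GL_LUMINANCE with GL_UNSIGNED_BYTE data, i.e. 8 bits per texel): a float in some known range [lo, hi] is quantized to a byte on upload, and the shader maps color.x (which comes back as byte / 255) back into that range.

```python
def encode(value, lo, hi):
    """Map a float in [lo, hi] to an 8-bit luminance byte for upload."""
    t = (value - lo) / (hi - lo)
    return round(t * 255)

def decode(byte, lo, hi):
    """What the shader would compute from color.x (= byte / 255.0)."""
    return lo + (byte / 255.0) * (hi - lo)

# 8 bits gives only 256 levels, so the round-trip error is bounded by
# half a quantization step: (hi - lo) / 255 / 2.
lo, hi = -1.0, 1.0
x = 0.3
err = abs(decode(encode(x, lo, hi), lo, hi) - x)
assert err <= (hi - lo) / 255 / 2
```

So the scheme works in principle, but the precision question is exactly my worry: with one luminance byte per texel I would get at best 1/256 of the chosen range per step, which may or may not be enough for the calculation.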
I would be very grateful for any help of any kind.