I have an RGB float texture where each pixel represents a particle (RGB = XYZ).
I am manipulating the texture with Cg programs to simulate movement, and that works fine.
The problem is the display step: when I am done manipulating, I have to read the texture (pbuffer) back to the CPU and then submit the particles one at a time in a for loop.
This works, but I would very much like to do it in another way.
I am told it should be possible to reinterpret the texture in video memory as a vertex array and pass it to something like “glRenderArray(GL_PIXELS)”.
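For what it's worth, what you're describing sounds like the "copy to vertex array" trick: read the pbuffer's pixels into a buffer object that never leaves video memory, then rebind that same buffer as a vertex array and draw points from it. A rough sketch, assuming the GL_ARB_pixel_buffer_object and GL_ARB_vertex_buffer_object extensions are available and the pbuffer is the current read context (`width`, `height`, and `vbo` are my names, not anything standard):

```c
GLuint vbo;
GLsizei count = width * height;   /* one particle per pixel */

/* Allocate a buffer object large enough for count RGB float triples. */
glGenBuffersARB(1, &vbo);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, vbo);
glBufferDataARB(GL_PIXEL_PACK_BUFFER_ARB,
                count * 3 * sizeof(GLfloat), NULL, GL_STREAM_COPY_ARB);

/* With a pack buffer bound, glReadPixels writes into the buffer object
 * (the last argument is an offset, not a CPU pointer), so the copy
 * stays in video memory. */
glReadPixels(0, 0, width, height, GL_RGB, GL_FLOAT, 0);
glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);

/* Rebind the same buffer as a vertex array: RGB becomes XYZ. */
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
glVertexPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, count);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
```

On NVIDIA hardware of that generation there is also GL_NV_pixel_data_range, which achieves the same thing with a different API, but the buffer-object route above is the more portable one.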
I too am very interested in an answer to this!
In the meantime: what kind of performance benefit are you seeing? That is, leaving out the code that copies the pbuffer to system memory, how much faster can you generate, say, 512×512 particles in the shader versus simply on the CPU? (I guess this is only meaningful if you also say what kind of graphics card and CPU you have.)
I am building an application where I hold physical values in 5 pbuffers (three 64×64 textures and two 255×255 textures). I run roughly nine render passes between the 64×64 textures and a single pass into the 255×255 texture.
I get ~14 fps, and when I omit the particles it goes up to ~22 fps.
I use a dual Pentium III 1 GHz with a GeForce FX 5700.