One of my shaders puts the particle positions (x, y, z, w) into textureid[0]. Now I need to calculate, for each position, its distance from the eye. But the calculated distance is a single value, whereas the particle position has four components (x, y, z, w). Currently, I write the same calculated distance into all four channels (x, y, z, w) of another texture.
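Roughly, my current distance pass looks like this (the texture and uniform names, `positions` and `eyePos`, are just placeholders):

```glsl
// Fragment shader for the current distance pass. "positions" holds the
// (x, y, z, w) particle positions; "eyePos" is the eye position in the
// same coordinate space.
uniform sampler2D positions;
uniform vec3 eyePos;
varying vec2 texCoord;

void main()
{
    vec4 p = texture2D(positions, texCoord);
    float d = distance(p.xyz, eyePos);
    // The single distance gets duplicated into all four channels.
    gl_FragColor = vec4(d, d, d, d);
}
```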
Is there a way I can reduce these 4 values to a single value on the GPU itself?
I am outputting the distance to a texture: for a position (x, y, z, w), I write the value (d, d, d, d) to another texture. I want to reduce this texture to a quarter of its size, so that I have a single value d for the corresponding (x, y, z, w) in the first texture. The problem is that the textures attached to a framebuffer object all have to be the same size.
What I want is to use shaders to get a texture of size width*height from a texture of size width*height*4.
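Something like the following gather pass is what I have in mind: render into the small target while sampling the large distance texture, packing four horizontally adjacent texels into the RGBA channels of one output texel. The names, and the assumption that the four source texels sit side by side horizontally, are mine:

```glsl
// Fragment shader for the reduction pass, rendered into a target that
// is a quarter as wide as the distance texture. "distances" holds
// (d, d, d, d) per texel; "texelWidth" is 1.0 / (source texture width).
uniform sampler2D distances;
uniform float texelWidth;
varying vec2 texCoord;     // interpolated coords of the small target

void main()
{
    float col = floor(gl_FragCoord.x);          // output column index i
    float x0 = (4.0 * col + 0.5) * texelWidth;  // center of source texel 4i
    float d0 = texture2D(distances, vec2(x0,                    texCoord.y)).r;
    float d1 = texture2D(distances, vec2(x0 +       texelWidth, texCoord.y)).r;
    float d2 = texture2D(distances, vec2(x0 + 2.0 * texelWidth, texCoord.y)).r;
    float d3 = texture2D(distances, vec2(x0 + 3.0 * texelWidth, texCoord.y)).r;
    // Four distances, one per channel, in a single output texel.
    gl_FragColor = vec4(d0, d1, d2, d3);
}
```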
Another way to interpret the problem:
I have a texture of size width*height in format GL_RGBA, which I use to store vertex data (x, y, z, w), one vertex per texel, so I have width*height vertices. Now I want to create a buffer that stores only the x value of each vertex from the texture.
I want this to be done entirely on the GPU.
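For this interpretation, I imagine a single pass that samples four consecutive position texels and writes their x components into one RGBA texel of a target a quarter as wide (again, the names are placeholders):

```glsl
// Fragment shader extracting the x component of four consecutive
// vertices into one RGBA texel. "positions" is the width*height vertex
// texture; "texelWidth" is 1.0 / width.
uniform sampler2D positions;
uniform float texelWidth;
varying vec2 texCoord;

void main()
{
    float col = floor(gl_FragCoord.x);
    float x0 = (4.0 * col + 0.5) * texelWidth;
    float a = texture2D(positions, vec2(x0,                    texCoord.y)).x;
    float b = texture2D(positions, vec2(x0 +       texelWidth, texCoord.y)).x;
    float c = texture2D(positions, vec2(x0 + 2.0 * texelWidth, texCoord.y)).x;
    float d = texture2D(positions, vec2(x0 + 3.0 * texelWidth, texCoord.y)).x;
    gl_FragColor = vec4(a, b, c, d);
}
```

As far as I understand, the packed result could then be copied into a buffer object with glReadPixels while a pixel buffer object is bound to GL_PIXEL_PACK_BUFFER, which keeps the transfer on the GPU — but I am not sure whether this is the right approach.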