Our goal is to average four neighboring values in a texture using ATI_text_fragment_shader. So far, the closest we can get is to average four neighboring texture coordinates (not the actual texel values).

Here is our code so far. Please help!
#specify offsets of the texels to read
#texture is 256x256, so 0.00390625 (= 1/256) is the offset
#of one texel in [0,1] texture coordinates
CONSTANT c0 = {0.00390625,0,0,0};
CONSTANT c1 = {0,0.00390625,0,0};
CONSTANT c2 = {-0.00390625,0,0,0};
CONSTANT c3 = {0,-0.00390625,0,0};

#read in current texture coordinate
PassTexCoord r5, t0.str;

#specify location of pixels to include in average
ADD r1, r5, c0;
ADD r2, r5, c1;
ADD r3, r5, c2;
ADD r4, r5, c3;


#pass the pixel coordinates
PassTexCoord r1, r1.str;
PassTexCoord r2, r2.str;
PassTexCoord r3, r3.str;
PassTexCoord r4, r4.str;

#how do we sample actual texture values?

#average the values (.quarter scales the final sum by 1/4)
MOV r0, r1;
ADD r0, r0, r2;
ADD r0, r0, r3;
ADD r0.quarter, r0, r4;
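For reference, this is the arithmetic the shader is meant to compute, written out on the CPU (a sketch in plain C; the `avg4` helper and the GL_REPEAT-style wrapping are our own assumptions, not part of the extension):

```c
#include <assert.h>

#define N 256  /* 256x256 texture, so one texel step = 1.0/256 = 0.00390625 */

/* Average of the four axis-aligned neighbors of texel (x, y),
 * wrapping at the edges (the equivalent of GL_REPEAT addressing). */
static float avg4(const float img[N][N], int x, int y)
{
    float sum = img[y][(x + 1) % N]          /* +x neighbor (c0) */
              + img[(y + 1) % N][x]          /* +y neighbor (c1) */
              + img[y][(x + N - 1) % N]      /* -x neighbor (c2) */
              + img[(y + N - 1) % N][x];     /* -y neighbor (c3) */
    return 0.25f * sum;  /* the shader's .quarter modifier does this divide */
}
```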


#how do we sample actual texture values?

Using the SampleMap instruction. For details, see the extension specification.

Yes, SampleMap would do it, but we want to SampleMap the texture at the offset coordinates, and we can't do that until after we have calculated those coordinates in the first pass.

I expect the syntax for SampleMap in this case would look like
SampleMap r1, r1.str;
SampleMap r2, r2.str;

Where can we place these instructions?
At the end of the first pass? At the beginning of the second pass?

In general, if you call PassTexCoord, does that restrict your use of SampleMap on the same texture coordinate?

Just change the four PassTexCoord instructions in the second pass to SampleMap. It should then sample the texture maps bound to sampler stages 1-4, using the texture coordinates in r1-r4 computed in the first pass.
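Putting the whole program together, it would look something like this (a sketch, untested; the StartConstants/StartPrelimPass/StartOutputPass framing follows the extension spec, and it assumes the same texture is bound on units 1-4):

```
!!ATIfs1.0
StartConstants;
    CONSTANT c0 = {0.00390625, 0.0, 0.0, 0.0};
    CONSTANT c1 = {0.0, 0.00390625, 0.0, 0.0};
    CONSTANT c2 = {-0.00390625, 0.0, 0.0, 0.0};
    CONSTANT c3 = {0.0, -0.00390625, 0.0, 0.0};
EndConstants;

#first pass: compute the four offset coordinates
StartPrelimPass;
    PassTexCoord r5, t0.str;
    ADD r1, r5, c0;
    ADD r2, r5, c1;
    ADD r3, r5, c2;
    ADD r4, r5, c3;
EndPass;

#second pass: dependent reads at the computed coordinates,
#sampling the textures on units 1-4
StartOutputPass;
    SampleMap r1, r1.str;
    SampleMap r2, r2.str;
    SampleMap r3, r3.str;
    SampleMap r4, r4.str;
    MOV r0, r1;
    ADD r0, r0, r2;
    ADD r0, r0, r3;
    ADD r0.quarter, r0, r4;
EndPass;
```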

What exactly are you trying to do, that goes beyond the standard texture filters?

We are trying to use the graphics card to do some general computations. I know that OpenGL's imaging extensions allow for convolutions and other image filters, but we want to see whether we can do it on the GPU ourselves.
We're starting with a simple Jacobi solver for the heat equation, but we haven't gotten a handle on some of the more subtle aspects of ATI_text_fragment_shader.

Dependent texture reads can only be done in the second pass.
But if you bind the same texture object on multiple texture units, you can get pre-generated offsets by applying a different TEXTURE matrix on each texture unit. In that case, you could sample the offset texels already in the first pass.
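The setup for that would be something along these lines (a sketch only, assuming a working GL context with ARB_multitexture; `tex` is a hypothetical name for your texture object):

```
/* Bind the same texture object on units 1-4, each with a texture
 * matrix that pre-translates the coordinates by one texel. */
const GLfloat du = 1.0f / 256.0f;
const GLfloat offs[4][2] = { { du, 0.0f }, { 0.0f,  du },
                             { -du, 0.0f }, { 0.0f, -du } };
for (int i = 0; i < 4; ++i) {
    glActiveTextureARB(GL_TEXTURE1_ARB + i);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);   /* same object on every unit */
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(offs[i][0], offs[i][1], 0.0f);
}
glMatrixMode(GL_MODELVIEW);
glActiveTextureARB(GL_TEXTURE0_ARB);
```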

Using multiple textures seems to be the way to go. The fragment program finally works!

Because the input texture is the result from the last round, I would have to update all of the textures each round. Is there a cheap way of using one image and binding it to multiple textures, so that when I modify the image, they all get modified?

I’m using glCopyTexSubImage to grab the output from the framebuffer and I only want to do it once.

If all texture units bind the same texture object, doesn't that work?

Sure. You just bind the texture to multiple texture ‘units’.

I.e., somewhere in your code you should have called glGenTextures to generate one texture object.

Then you just bind that one object on every unit you use.
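Something like this, assuming ARB_multitexture and that the object from glGenTextures is named `tex`:

```
glActiveTextureARB(GL_TEXTURE0_ARB);
glBindTexture(GL_TEXTURE_2D, tex);
glActiveTextureARB(GL_TEXTURE1_ARB);
glBindTexture(GL_TEXTURE_2D, tex);   /* same object, second unit */
/* ...and likewise for the remaining units */
```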

Changes to the texture object automatically apply everywhere it’s bound because logically it’s just one set of state.