The output variation of dFdx()

I just wanted to test the effect of dFdx() (one of the built-in functions in GLSL). This function is used to get the derivative of a variable with respect to the window-space X coordinate. I tested it in the following way.
vertex shader:


void main()
{
    gl_TexCoord[0]  = gl_MultiTexCoord0;
    gl_Position     = ftransform();
}

fragment shader:


#define lineWidth 0.02          
const float C_PI = 3.1415926;

void main()
{
	vec4 SinColor, BackColor, finalColor;
	BackColor = vec4(1.0, 1.0, 0.0, 1.0);
	float px, sx, x;
	x = gl_TexCoord[0].s * 2.0 * C_PI;
	px = sin(x);
	sx = (dFdx(px) / (2.0 * C_PI));

	float scaledS = sx;
	float scaledT = (gl_TexCoord[0].t - 0.5) * 2.0;

	if(abs(scaledT - scaledS) <= lineWidth){
		SinColor = vec4(1.0, 0.0, 0.0, 1.0);
		finalColor = SinColor;
	}else{
		finalColor = BackColor;
	}

	gl_FragColor = finalColor;
}

According to calculus, d/dx sin(2πx) = 2π cos(2πx), so I tested it as stated above. My idea was that if dFdx() is correctly supported by the hardware, a cosine curve should be displayed by the above processing. But the final result looks like a line parallel to the X axis.
But when I modified the fragment shader as follows (just amplifying the Y axis 350 times):


 void main()
{
	vec4 SinColor, BackColor, finalColor;
	BackColor = vec4(1.0, 1.0, 0.0, 1.0);
	float px, sx, x;
	x = gl_TexCoord[0].s * 2.0 * C_PI;
	px = sin(x);
	sx = (dFdx(px) / (2.0 * C_PI)) * 350.0;

	float scaledS = sx;
	float scaledT = (gl_TexCoord[0].t - 0.5) * 2.0;

	if(abs(scaledT - scaledS) <= lineWidth){
		SinColor = vec4(1.0, 0.0, 0.0, 1.0);
		finalColor = SinColor;
	}else{
		finalColor = BackColor;
	}

	gl_FragColor = finalColor;
}

then the cosine curve was displayed correctly. (My graphics card is a GeForce 8800GT.)
My question is: why does this happen?
Could anybody give me an accurate definition of dFdx()?
Any help would be appreciated. Thank you very much!

dFdx() just returns the difference between two neighbouring fragments, so in mathematical terms it’s not dF/dx but just dF in the direction of the X axis.

GPUs process 2x2 fragment blocks to be able to calculate this difference, i.e. within such a block dFdx will return the same value for all four fragments.
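To make that concrete, here is a toy model in plain C (my own sketch, not any vendor's actual scheme) of quad-based derivatives: walk the framebuffer in 2x2 blocks and give all four fragments the same right-minus-left difference:

```c
/* Toy model of quad-based coarse derivatives: f is a w*h grid of
   per-fragment values (w and h assumed even); out[i] receives the
   dFdx-style value.  All four fragments of a 2x2 block get the same
   difference, taken here from the top row -- whether each row gets its
   own difference instead is implementation-dependent. */
void quad_dfdx(const float *f, float *out, int w, int h)
{
    for (int y = 0; y < h; y += 2) {
        for (int x = 0; x < w; x += 2) {
            float d = f[y * w + x + 1] - f[y * w + x];
            out[y * w + x]           = d;
            out[y * w + x + 1]       = d;
            out[(y + 1) * w + x]     = d;
            out[(y + 1) * w + x + 1] = d;
        }
    }
}
```

Note that the result is a raw difference between adjacent fragments, not divided by any Δx in texture space, which is why the shader's values came out far smaller than the analytic derivative.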

And newer GPUs process in 3x3 blocks.

I wonder, if you render 1 pixel only, do the other 8 also compute (but computing ignored garbage)? From what I’ve read, they do.

I don’t think so. Certainly most GPUs process larger groups of e.g. 16 or 32 pixels on a shader processor (or whatever the terminology of a specific IHV may be) but for the purpose of calculating derivatives (either for dFdx/dFdy or texture LOD) every GPU I’m aware of uses 2x2 blocks.

I wonder, if you render 1 pixel only, do the other 8 also compute (but computing ignored garbage)? From what I’ve read, they do.

Yes.

Thank you for the explanation. I'd never thought about it from the viewpoint of fragments. I find that the "350" is just the number of fragments (pixels) along the X axis covered by one entire period of the cosine function in the scene I set up (measured by glDrawPixels).
I think that's exactly the meaning of "but just dF in the direction of the X axis": the final output of dFdx() is just the difference to the neighbouring fragment (or within a 2x2 fragment block) in the direction of the X axis. Am I right?