Hi,
I don’t have NVidia hardware myself, but I have remote-debugged via IRC with somebody who owns a GeForce 5900 (running NVidia’s 6111 Linux drivers for AMD64).
I reduced my problem to a simple testcase with the following fragment program:
!!ARBfp1.0
TEMP tmp0, tmp2;
# tmp0.xy = fract(texcoord.xy)
FRC tmp0.xy, fragment.texcoord[0];
# tmp2.w = texcoord.y - fract(texcoord.y), i.e. floor(texcoord.y)
SUB tmp2.w, fragment.texcoord[0].y, tmp0.y;
# blue = floor(texcoord.y) * 0.05
MUL result.color.z, tmp2.w, 0.05;
# red/green = fract(texcoord.xy)
FRC result.color.xy, fragment.texcoord[0];
END
This fragment program is used on a quad that spans the entire screen, with texture coordinates ranging from (0,0) (top left) to (100, 50) (bottom right).
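In case it matters, the host-side setup is essentially the following (a simplified sketch, not the literal test code; context creation and error handling omitted):

#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

extern const char *fp_src;  /* the fragment program quoted above */

void draw_test_quad(void)
{
    GLuint prog;

    /* Upload and enable the ARB fragment program. */
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(fp_src), fp_src);
    glEnable(GL_FRAGMENT_PROGRAM_ARB);

    /* With identity modelview/projection matrices, coordinates in -1..1
     * cover the whole viewport. Texcoords run from (0,0) at the top left
     * to (100,50) at the bottom right. */
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f,   0.0f);  glVertex2f(-1.0f,  1.0f); /* top left */
    glTexCoord2f(100.0f, 0.0f);  glVertex2f( 1.0f,  1.0f); /* top right */
    glTexCoord2f(100.0f, 50.0f); glVertex2f( 1.0f, -1.0f); /* bottom right */
    glTexCoord2f(0.0f,   50.0f); glVertex2f(-1.0f, -1.0f); /* bottom left */
    glEnd();
}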
Here is a screenshot of the expected behaviour (taken on a Radeon 9700). On the NVidia-based system, however, the blue component is uniformly zero over the entire screen.
By contrast, with this slightly different fragment program the output matches the expected behaviour:
!!ARBfp1.0
TEMP tmp0, tmp2;
# tmp2.w = floor(texcoord.y), computed directly this time
FLR tmp2.w, fragment.texcoord[0].y;
MUL result.color.z, tmp2.w, 0.05;
FRC result.color.xy, fragment.texcoord[0];
END
As you can see, the only change is that I have replaced the FRC+SUB pair with a single FLR, but since floor(x) == x - fract(x), that should not make a difference. In fact, this relationship is used as a definition in the spec.
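To double-check, the identity even holds exactly in IEEE single precision over the coordinate range used here (fract(y) = y - floor(y) is computed without rounding error, so subtracting it back recovers floor(y) bit-for-bit). A quick host-side sanity check in C, just a sketch to illustrate the argument, not part of the testcase:

#include <assert.h>
#include <math.h>
#include <stdio.h>

int main(void)
{
    float y;
    for (y = 0.0f; y <= 50.0f; y += 0.01f) {
        float frc     = y - floorf(y);  /* what FRC should compute */
        float via_sub = y - frc;        /* FRC + SUB path of program 1 */
        float via_flr = floorf(y);      /* FLR path of program 2 */
        assert(via_sub == via_flr);
        /* the blue channel before clamping agrees as well: */
        assert(via_sub * 0.05f == via_flr * 0.05f);
    }
    printf("FRC+SUB and FLR agree everywhere on [0, 50]\n");
    return 0;
}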
Am I right in thinking that the two programs are equivalent, or am I missing something obvious here?
cu,
Prefect