benchmark differences

Hi,

I ran the same algorithm on two different machines using OpenGL and tested both in exactly the same way. The first machine had a GTX 260 with an AMD Athlon X2 at 2.8 GHz; the second had a GTX 285 with an Intel Celeron at 2.6 GHz.

I performed a set of linear algebra operations and timed the results for different sizes of linear system (all of the sizes were powers of two).
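To make the question concrete, this is roughly the shape of the timing loop I mean (a simplified sketch, not my actual code: solveLinearSystemOnGpu is just a placeholder for the real GL work, GLFW is only there to provide an offscreen context, and glFinish() makes sure the GPU has actually finished before the clock stops):

#include <GLFW/glfw3.h>   // pulls in the system OpenGL header by default
#include <chrono>
#include <cstdio>

// Placeholder for the real work: upload an n x n system, run the GL passes,
// read back the result. Hypothetical name, not the actual solver code.
static void solveLinearSystemOnGpu(int n)
{
    (void)n; // real shader dispatch / draw calls would go here
}

int main()
{
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);            // offscreen context
    GLFWwindow* win = glfwCreateWindow(64, 64, "bench", nullptr, nullptr);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    for (int n = 4; n <= 2048; n *= 2)                   // power-of-two sizes
    {
        auto t0 = std::chrono::steady_clock::now();
        solveLinearSystemOnGpu(n);
        glFinish();                                      // wait for the GPU to finish
        auto t1 = std::chrono::steady_clock::now();
        double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
        std::printf("%5d x %-5d : %8.3f ms\n", n, n, ms);
    }

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}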

On the first machine, time grew roughly logarithmically with system size up to about 1024x1024; at 2048x2048 the time jumped and the growth looked almost linear. On the second machine, the relationship stayed roughly logarithmic all the way up to 2048x2048.
Oh, and the results indicate that the GTX 260 outperforms the GTX 285 (weird).

I wonder if anyone can help me understand why the two GPUs show different trends. I thought it might be because the GTX 260 has fewer processors and slower memory, but then I also took the CPU into account. How much would the CPU matter at different system sizes? What other factors should be considered?

Secondly, why did the slower GPU outperform the faster one?

Lastly, why do the results show a logarithmic trend, with a steep incline over the smallest sizes (from 4x4 up to about 512x512)?

Any suggestions would be really appreciated.
Thank you.