MRT and depth encoding on ATI

I am trying to encode depth information as well as color, normal and other nifty things.
The problem is that on ATI, whenever I read from gl_FragCoord.z, the driver falls back to software rendering.

Short of throwing my card out the window and running to the store to make nVidia a little richer, is there another way?
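
For reference, here is roughly the kind of fragment shader that triggers the fallback (the MRT layout and variable names are just an illustration of my setup):

```glsl
varying vec3 normal;

void main()
{
    gl_FragData[0] = gl_Color;                        // color
    gl_FragData[1] = vec4(normalize(normal), 0.0);    // normal
    gl_FragData[2] = vec4(gl_FragCoord.z);            // reading gl_FragCoord.z drops to software
}
```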

You can use an extra interpolant to pass the position from the vertex shader to the fragment shader and compute the depth there yourself, instead of reading gl_FragCoord.z.
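
Something along these lines (a minimal sketch; the varying name, the attachment index, and the z/w encoding are assumptions, you could just as well pass linear eye-space depth):

```glsl
// --- vertex shader ---
varying vec4 clipPos;

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    clipPos = gl_Position;   // pass the clip-space position along
}

// --- fragment shader ---
varying vec4 clipPos;

void main()
{
    // Perspective-correct interpolation of clipPos plus the divide below
    // reproduces the value gl_FragCoord.z would give (with the default depth range).
    float ndcDepth = clipPos.z / clipPos.w;   // [-1, 1]
    float depth = ndcDepth * 0.5 + 0.5;       // [0, 1]
    gl_FragData[2] = vec4(depth);             // whichever attachment you use for depth
}
```

If you do not need to match gl_FragCoord.z exactly, storing linear eye-space depth instead is often more convenient for reconstructing positions later.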

It might be worth making sure point and line widths are set to 1 when trying to get the fragment position. I had a similar problem that was caused by this.

Are you using a depth bias?