I have a Radeon HD 3870 in my Windows machine right now, and it supports 8x color and depth texture samples. So either the cards used in the iMacs are slightly different, or it's a limitation of the driver, not the hardware.
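For anyone who wants to double-check their own setup, here's a rough sketch of the queries I mean (assuming a GL 3.2+ context; GL_MAX_COLOR_TEXTURE_SAMPLES and GL_MAX_DEPTH_TEXTURE_SAMPLES are the standard enums, nothing vendor-specific):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Print the multisample limits the driver reports (call with a current GL 3.2+ context).
void printSampleLimits()
{
    GLint colorSamples = 0, depthSamples = 0, fbSamples = 0;
    glGetIntegerv(GL_MAX_COLOR_TEXTURE_SAMPLES, &colorSamples); // multisample color textures
    glGetIntegerv(GL_MAX_DEPTH_TEXTURE_SAMPLES, &depthSamples); // multisample depth textures
    glGetIntegerv(GL_MAX_SAMPLES, &fbSamples);                  // renderbuffer/framebuffer limit
    printf("color: %dx  depth: %dx  framebuffer: %dx\n", colorSamples, depthSamples, fbSamples);
}
```

On my card under the Windows driver, the color and depth texture queries both come back as 8.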
It must be a driver issue. It wouldn't make much sense for ATI or Nvidia to cripple their cards for OSX. I believe the only difference between a PC card and an OSX-capable card is that the OSX cards support EFI during POST, and probably carry some sort of special ID so that OSX will accept them into its ecosystem.
The other oddity I noticed was that the maximum number of texture array layers is 256 on AMD and 512 on Nvidia cards - but in AMD's and Nvidia's own drivers, they are equal to the max 3D texture size (8192 for AMD and 2048 for Nvidia).
Also, there's almost certainly a copy/paste typo in the MAX_GEOMETRY_UNIFORM_BLOCKS value for AMD cards - it's reported as 64, but it should be the same as the corresponding vertex and fragment shader values, 14.
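For reference, these are the standard GL queries behind those numbers; a minimal sketch along these lines is what I used to read them back (the enums are the ones from the core spec, not vendor extensions):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Dump the texture and uniform block limits discussed above.
void printResourceLimits()
{
    GLint arrayLayers = 0, tex3DSize = 0, vsBlocks = 0, gsBlocks = 0, fsBlocks = 0;
    glGetIntegerv(GL_MAX_ARRAY_TEXTURE_LAYERS,    &arrayLayers);
    glGetIntegerv(GL_MAX_3D_TEXTURE_SIZE,         &tex3DSize);
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_BLOCKS,   &vsBlocks);
    glGetIntegerv(GL_MAX_GEOMETRY_UNIFORM_BLOCKS, &gsBlocks);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_BLOCKS, &fsBlocks);
    printf("array layers: %d  3D size: %d  uniform blocks VS/GS/FS: %d/%d/%d\n",
           arrayLayers, tex3DSize, vsBlocks, gsBlocks, fsBlocks);
}
```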
Writing the depth to a color texture isn't a good idea because you are still using a lower-resolution depth buffer to test against.
It certainly won't help resolve depth-fighting issues, but it does give you a more accurate world position at longer distances. Granted, there are only a few instances where this might actually matter. In our case, the added precision helped the GL lighting more closely match our software renderer's results in larger-scale scenes, at the cost of promoting an RGB32F target to RGBA32F.
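To illustrate the kind of setup I mean (a rough sketch, not our exact code): a 32-bit float color attachment that the fragment shader writes its eye-space depth / world position into, alongside an ordinary fixed-point depth attachment that still does the actual depth testing. The width/height parameters are just placeholders for the render target size.

```cpp
#include <GL/glew.h>

// Create an FBO with an RGBA32F color target plus a regular 24-bit depth buffer.
void createPositionDepthTarget(GLsizei width, GLsizei height,
                               GLuint& fbo, GLuint& colorTex, GLuint& depthRb)
{
    // Float color attachment: precise values to read back later,
    // but never used by the depth test itself.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // Fixed-point depth buffer: this is what the depth test still runs against,
    // which is why the float target doesn't help with depth fighting.
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
}
```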