(basically, you just crank up your FOV and resolution, and pretend that it’s all-the-same to the GPU)
“pretend”? It is “all-the-same to the GPU”. That’s the whole point of Eyefinity: an application rendering to many monitors works almost identically to an application rendering to one. This makes it better than multi-GPU configurations, because the application doesn’t have to do any work.
The only Eyefinity-specific code you need is querying the size of the desktop and setting up an appropriate aspect ratio. Which you should be doing anyway, simply due to the variety of monitor resolutions out there.
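To give an idea of how little that is, here’s a minimal sketch, assuming a Win32 target (the virtual-screen metrics are just one way to ask for the spanned desktop size; your windowing toolkit will have its own equivalent). The vertical FOV and the 3.14159… constant are placeholders:

```cpp
// Sketch: query the spanned desktop size and derive aspect ratio and
// horizontal FOV from a fixed vertical FOV. Assumes Win32.
#include <windows.h>
#include <cmath>
#include <cstdio>

int main()
{
    // Bounding size of the whole desktop, which is what an Eyefinity
    // display group presents to the application.
    int width  = GetSystemMetrics(SM_CXVIRTUALSCREEN);
    int height = GetSystemMetrics(SM_CYVIRTUALSCREEN);
    float aspect = static_cast<float>(width) / static_cast<float>(height);

    // Keep the vertical FOV fixed and let the horizontal FOV widen with the
    // aspect ratio, so a 3x1 monitor group simply sees more of the scene.
    const float pi = 3.14159265f;
    float verticalFov   = 60.0f * pi / 180.0f;              // hypothetical choice
    float horizontalFov = 2.0f * std::atan(std::tan(verticalFov * 0.5f) * aspect);

    std::printf("desktop %dx%d, aspect %.2f, hfov %.1f deg\n",
                width, height, aspect, horizontalFov * 180.0f / pi);
    return 0;
}
```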
If so, then it seems this setup is interesting for simple WoW-style games. But if your app really pushes the GPU (lots of verts, lots of fill, lots of view-dependent texturing, high AA, different projections per display, etc.), then it seems it would bring this setup to its knees.
Well, yes, it would… today. What about later, on the HD 6970, or next year on the HD 7970? It seems rather close-minded to say that Eyefinity is ultimately useless just because <insert program here> pushes current hardware too hard for it to work.
Really, when you get down to it, hardware is running out of stuff to do. Tessellation is the main feature of DX11/GL4.0, and it’s not even that useful overall. All of the easy stuff is taken, and even that isn’t going to be used by the majority of applications. Developers simply don’t have the resources to use much more. Implementing the hard stuff requires a lot of artist input (i.e., more money) and really starts approaching the Uncanny Valley unless you do it exactly right. Even if hardware started supporting more features, they wouldn’t be widely used, simply due to lack of need.
Eyefinity is an effective way to use hardware capacity that developers otherwise aren’t using or can’t use.
Also, not everyone cares about pushing hardware. Not everyone is trying to make the longest, most complex fragment shaders imaginable. Some of them just get to the point where what they have is “good enough” for their needs. Would you deny Eyefinity to these developers and their users?
Dramatically bumping up the resolution can also improve the immersive experience of playing a game or walking through a simulated environment. Do not discount the impact of being able to turn your head and see more of the world.
Also, the number of vertices doesn’t scale with resolution, unless you’re somehow creating more vertices at higher resolutions. So vertex shader costs only increase in the sense that you’re rendering more stuff with a wider FOV.
And if you’re rendering at a high resolution, you don’t need as much (or any) anti-aliasing. You also get more efficiency out of your textures (assuming regular texture access patterns), though you’ll be sampling the higher-resolution mip levels more often.
All this means that performance does not scale linearly with resolution. It only does so if you’re fragment program execution bound.
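To put a rough number on it, here’s a toy cost model (the millisecond figures are made up purely for illustration, not measurements): a frame cost split into a resolution-independent vertex term and a per-pixel fragment term. Only the fragment term scales with pixel count, so tripling the pixels doesn’t triple the frame time unless fragment work dominates.

```cpp
// Back-of-the-envelope model: frame time = vertex term + fragment term,
// where only the fragment term scales with pixel count.
#include <cstdio>

int main()
{
    const double vertexCostMs   = 4.0;   // hypothetical vertex/geometry work per frame
    const double fragmentCostMs = 8.0;   // hypothetical fragment work per frame at 1920x1080

    const double basePixels      = 1920.0 * 1080.0;
    const double eyefinityPixels = 5760.0 * 1080.0;   // 3x1 landscape display group

    double scale          = eyefinityPixels / basePixels;          // 3x the pixels
    double baseFrame      = vertexCostMs + fragmentCostMs;
    double eyefinityFrame = vertexCostMs + fragmentCostMs * scale;

    // 12 ms -> 28 ms here: well short of a 3x slowdown, because the
    // vertex term doesn't grow with resolution.
    std::printf("1 monitor: %.1f ms, 3 monitors: %.1f ms (%.2fx)\n",
                baseFrame, eyefinityFrame, eyefinityFrame / baseFrame);
    return 0;
}
```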