Multiple graphics card rendering with PCI Express


I hope I’m not offtopic. If so pls close/delete the thread!

Because of the upcoming PCI Express technology it will be possible to have more than one 3D graphics card in one PC. (I know that this is possible today too, but there are no really good cards with a plain PCI (non-express) interface, and plain PCI is VERY slow.)

So here are some questions:

  • Is it possible to get a device/rendering context for each graphics card, and if so, how?

  • Is there any way to transfer framebuffer data (including color, depth and stencil data) from one context into another? My idea was simply to split the rendering primitives across the two contexts and then merge the results.
    I know that it is possible to transfer the color and depth data to a texture and then write a simple fragment program that writes the values into the framebuffer of the “primary” context. But is there (today) any simpler way of doing that? Maybe the superbuffer extension will add support for such things.

  • Are textures, display lists, … shared between rendering contexts which don’t have the same device context?

  • The high-performance graphics cards from NV (FX5950 Ultra) and ATI (Radeon 9800XT) cost about 450 euro. A middle-class card (Radeon 9600XT or FX5700) costs about 160 euro. The middle-class cards have about 70% - 80% of the performance of the high-performance cards.
    So here’s my idea: instead of buying one high-performance card, you buy two middle-class cards. With support for multiple graphics cards this would give you a speedup of nearly 2, so you get about 140% of the performance for less money.
    Ok, I know that developing such an application surely costs resources, but I think it would be profitable.
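To make the merge idea in the second bullet concrete, here is a toy CPU-side sketch of my own (the function name and layout are invented for illustration, not any real OpenGL path): given color and depth buffers read back from two contexts, the merged image keeps, per pixel, the color whose depth is nearer, which is the same comparison a fragment program sampling two depth textures would do on the card.

```c
#include <stdint.h>

/* Merge two framebuffers on the CPU: for each pixel, keep the color
 * whose depth value is nearer (smaller, as with glDepthFunc(GL_LESS)).
 * This mirrors what a depth-comparing fragment program would do when
 * compositing the output of two cards. All names here are illustrative. */
static void merge_framebuffers(int pixels,
                               const uint32_t *color_a, const float *depth_a,
                               const uint32_t *color_b, const float *depth_b,
                               uint32_t *color_out, float *depth_out)
{
    for (int i = 0; i < pixels; ++i) {
        if (depth_a[i] <= depth_b[i]) {       /* context A wins this pixel */
            color_out[i] = color_a[i];
            depth_out[i] = depth_a[i];
        } else {                              /* context B wins this pixel */
            color_out[i] = color_b[i];
            depth_out[i] = depth_b[i];
        }
    }
}
```

Note that this only composites color against depth; merging stencil data has no such obvious per-pixel rule, which is part of why splitting primitives across cards is harder than it first looks.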

What do you think about that?

Florian Rudolf

[edit] spelling

[This message has been edited by Corrail (edited 03-02-2004).]

The specifics of creating a context are platform specific.

I’d assume that if a window is entirely on a single device (screen) then the driver for that screen gets to create the context. Thus, create one window per screen, and you’ll get one context per screen.

On systems with “unified” drivers, and thus the same driver for each card across the slots (assuming the same manufacturer for all the cards), a single window MIGHT be able to span more than one device, but I wouldn’t depend on that.
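On Windows the one-window-per-screen approach might look roughly like the following C-style sketch (untested pseudocode; `create_window_on_display` is a hypothetical helper that would call CreateWindow positioned on that monitor’s desktop area):

```
/* Sketch only: one window and one GL context per attached display,
   on the assumption that each display is driven by its own card. */
DISPLAY_DEVICE dd = { .cb = sizeof dd };
for (DWORD i = 0; EnumDisplayDevices(NULL, i, &dd, 0); ++i) {
    if (!(dd.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP))
        continue;                            /* skip inactive devices */
    HWND wnd = create_window_on_display(&dd);/* hypothetical helper   */
    HDC  dc  = GetDC(wnd);
    PIXELFORMATDESCRIPTOR pfd = { /* ... fill in as usual ... */ };
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
    HGLRC rc = wglCreateContext(dc);         /* one context per screen */
    /* wglMakeCurrent(dc, rc) before issuing GL calls for that card   */
}
```

Whether each context actually lands on a different card is up to the driver, per the caveats above.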

Note that PCI Express comes in “x” variations; the default is “1x”, which is only about 2x the speed of PCI and is meant for regular expansion cards. The “16x” version for graphics cards requires more pins, so most controllers will probably support just one 16x slot plus some number of 1x slots. After all, it’s possible to support more, but whether it’s economical is another matter… Maybe some workstation chipsets will come through for you.
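For a rough feel of the numbers behind that (my own back-of-the-envelope figures, per direction and ignoring protocol overhead above the line encoding): PCI Express 1.0 signals at 2.5 Gbit/s per lane with 8b/10b encoding, and bandwidth scales linearly with lane count, while classic PCI is 133 MB/s shared across the whole bus.

```c
/* Approximate per-direction PCI Express bandwidth in MB/s.
 * 2.5 Gbit/s raw per lane, 8b/10b encoding (80% efficiency),
 * scaling linearly with the number of lanes. */
static double pcie_bandwidth_mb(int lanes)
{
    const double gbit_per_lane = 2.5;        /* raw signalling rate */
    const double encoding      = 8.0 / 10.0; /* 8b/10b overhead     */
    return lanes * gbit_per_lane * encoding * 1000.0 / 8.0;
}
```

That gives 250 MB/s for a 1x slot (about 1.9x the 133 MB/s of shared PCI, matching the “2x” figure above) and 4000 MB/s for a 16x slot.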

Multiple graphics cards per system were touted as a killer feature of AGP 3.0, too, but that never happened. I wouldn’t expect it to happen for PCI Express either. It’s allowed by the spec, sure, but that doesn’t mean there will be motherboards with two sixteen-lane slots.

Synchronization between separate cards isn’t trivial. It’s not the Voodoo 2 era anymore, when all a graphics card had to do was rasterize spans. Render-to-texture is one of the more obvious problems, but there are other subtle economic issues (like replicating vertex storage and processing without any performance gain in that area). It’s just not that worthwhile.

Look at the dual chip Volari V8 Ultra and see how it stacks up to a single chip version. There’s nowhere near enough performance gain for all the effort involved. And these are two chips on a single board.

One other thing I’d like to comment on is moving data between cards. AFAIK PCI Express is a star topology where every transfer between devices must pass through the host (the “northbridge” or something). This might turn out to be another bottleneck, depending on the implementation. I.e., a single 16x card might possibly saturate the “northbridge”, even if it supports dual 16x slots.

hm Thanks a lot… It seems like there are more problems than I thought…