Any OGL + Xgrid experience here?

I’m interested in trading information with anyone who’s tried combining a custom Mac OpenGL app with Xgrid: for example, writing a renderer in OpenGL and then submitting a multi-scene animation to your own Xgrid for faster rendering results.

I think Xgrid won’t improve your OpenGL app’s performance, since under normal conditions OpenGL uses your graphics hardware and Xgrid just distributes CPU power.
Besides, the OpenGL drivers couldn’t handle a setup like that anyway.
If you use software rendering, I think it could work.
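
If you do go the software route, something like this should get you a CGL context that never touches the GPU (just a sketch, not tested here; the renderer ID constant lives in CGLRenderers.h):

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/CGLRenderers.h>

/* Rough sketch: ask CGL for Apple's generic (software) renderer explicitly,
   so the grid task never depends on the GPU and can run on any node. */
CGLContextObj create_software_context(void)
{
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFARendererID, (CGLPixelFormatAttribute)kCGLRendererGenericFloatID,
        kCGLPFAColorSize,  (CGLPixelFormatAttribute)24,
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    CGLContextObj ctx = NULL;

    if (CGLChoosePixelFormat(attrs, &pix, &npix) == kCGLNoError && pix != NULL) {
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
    }
    return ctx;  /* NULL if no matching format or context creation failed */
}
```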

I’m not trying to accelerate one application; I’m implementing a render farm where each node uses GPU acceleration. Think of a render farm running NVIDIA’s Gelato. With hardware acceleration, it’s more efficient to render whole scenes than individual frames. So I want to explore using Xgrid to network some Xserves together (with decent OpenGL acceleration) and have the capacity to re-render many file sets quickly.

I think what H.Stony is saying is that Xgrid distributes tasks to CPUs, not to machines, so you’ll end up with four tasks per Xserve, even though it has only one GPU.

Also, the current Xserve only has a 64MB X1300, so the gain from networking Xserves, as opposed to buying a single Mac Pro with a decent video card, may not amount to much, if anything.

If you want to try it, you can use kCGLPFARemotePBuffer to create a context without requiring console access.
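
Something like this (rough sketch only, with minimal error handling):

```c
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <stdio.h>

/* Rough sketch: request an accelerated pixel format with the remote-pbuffer
   capability, so a context can be created from a background job with no
   console login. */
int main(void)
{
    CGLPixelFormatAttribute attrs[] = {
        kCGLPFAAccelerated,
        kCGLPFARemotePBuffer,
        kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
        kCGLPFADepthSize, (CGLPixelFormatAttribute)16,
        (CGLPixelFormatAttribute)0
    };

    CGLPixelFormatObj pix = NULL;
    GLint npix = 0;
    if (CGLChoosePixelFormat(attrs, &pix, &npix) != kCGLNoError || pix == NULL) {
        fprintf(stderr, "no matching pixel format\n");
        return 1;
    }

    CGLContextObj ctx = NULL;
    CGLError err = CGLCreateContext(pix, NULL, &ctx);
    CGLDestroyPixelFormat(pix);
    if (err != kCGLNoError) {
        fprintf(stderr, "context creation failed\n");
        return 1;
    }

    CGLSetCurrentContext(ctx);
    printf("renderer: %s\n", (const char *)glGetString(GL_RENDERER));

    CGLSetCurrentContext(NULL);
    CGLDestroyContext(ctx);
    return 0;
}
```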

Yes, I am aware that Xgrid distributes tasks to CPUs and not machines, and that each Xserve only has one GPU. Additionally, we’re looking at the option of an ATI Radeon X1300 256MB, which only adds $150 to the price… and will increase the render capacity.

Aside from the actual hardware rendering, there are a number of CPU-intensive tasks our environment needs to perform. We expect that, on a per-CPU basis, the GPU will be the bottleneck. Our hope is that multiple instances of our software will be able to access the GPU one after another: the GPU stays maxed out, while the other CPU-intensive tasks keep each CPU busy so it isn’t just sitting blocked on the GPU for too much of the time.

Think of it this way:
Processes 1 & 2: both start at about the same time, preparing to render a scene.
Process 1: gets the renderer, starts filling RAM with the finished frames.
Process 2: blocked from the renderer.
Process 1: finishes rendering, starts encoding the finished frames into multiple digital video formats and resolutions (our custom requirements).
Process 2: gets the renderer, starts filling RAM with the finished frames.

Clearly a few things are in place here: the Xserves have enough RAM to hold the finished frames (about 19 HD frames per scene, roughly 150MB of RAM per rendered scene); the same software doing the rendering also handles the video encoding; and we’re using an Xsan with a Fibre Channel back channel for all the data passing around.
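
As a rough illustration of the gating, something like a named semaphore per node would do it (render_scene and encode_frames are placeholders, not our actual function names):

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>

/* Per-node GPU gate: one named semaphore with an initial count of 1, so only
   one render process owns the GPU at a time; the others keep their CPUs busy
   encoding while they wait. */
#define GPU_SEM_NAME "/renderfarm-gpu"

extern void render_scene(const char *scene_path);   /* GPU-bound pass (placeholder) */
extern void encode_frames(const char *scene_path);  /* CPU-bound pass (placeholder) */

int process_scene(const char *scene_path)
{
    sem_t *gpu = sem_open(GPU_SEM_NAME, O_CREAT, 0644, 1);
    if (gpu == SEM_FAILED) {
        perror("sem_open");
        return -1;
    }

    sem_wait(gpu);              /* block until the GPU is free             */
    render_scene(scene_path);   /* fill RAM with the finished frames       */
    sem_post(gpu);              /* hand the GPU to the next process        */

    encode_frames(scene_path);  /* runs while another process is rendering */

    sem_close(gpu);
    return 0;
}
```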

Our goal is to create a render farm with about 20 hardware nodes, capable of about 100,000 such renders per hour (roughly 5,000 scenes per node per hour, or about 1.4 scenes per second per node).

Yeah, this is a very non-standard application we’re building here. Don’t even think games or traditional 3D animation.

Yup, and of course there’s SLI.

Sounds cool. The computation tasks execute locally and can call down to the GPU from there, so it should just work.

Context switching on a GPU can be expensive compared to a CPU. If you can share resources between contexts (textures, etc.), it should help significantly. Some kind of manual scheduling between the software threads, or simply capping the number of threads, will probably be needed.
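
For example, with CGL you can pass the first context as the share argument when creating the second, so they share textures and other objects (sketch only; choose_format() stands in for whatever pixel-format selection you already have):

```c
#include <OpenGL/OpenGL.h>

/* Sketch: two CGL contexts that share textures, display lists, etc., by
   passing the first context as the "share" argument of the second.
   Both contexts are created from the same pixel format so sharing works. */
extern CGLPixelFormatObj choose_format(void);   /* placeholder */

void create_shared_contexts(CGLContextObj *ctxA, CGLContextObj *ctxB)
{
    CGLPixelFormatObj pix = choose_format();

    CGLCreateContext(pix, NULL,  ctxA);   /* first context owns the objects */
    CGLCreateContext(pix, *ctxA, ctxB);   /* second context shares them     */

    CGLDestroyPixelFormat(pix);
}
```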

If you have SLI, then dedicating threads to different GPUs is worth attempting.
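
To pin different threads to different GPUs you’d enumerate the renderers first and then request a specific one via kCGLPFARendererID. Rough sketch of the enumeration part:

```c
#include <OpenGL/OpenGL.h>
#include <stdio.h>

/* Sketch: list the renderers CGL can see on this machine.  A thread can then
   pin its context to a specific GPU by putting that renderer ID into its
   pixel format with kCGLPFARendererID. */
void list_renderers(void)
{
    CGLRendererInfoObj info = NULL;
    GLint count = 0;

    if (CGLQueryRendererInfo(0xFFFFFFFF, &info, &count) != kCGLNoError)
        return;

    for (GLint i = 0; i < count; i++) {
        GLint rendererID = 0, accelerated = 0, vramBytes = 0;
        CGLDescribeRenderer(info, i, kCGLRPRendererID,  &rendererID);
        CGLDescribeRenderer(info, i, kCGLRPAccelerated, &accelerated);
        CGLDescribeRenderer(info, i, kCGLRPVideoMemory, &vramBytes);
        printf("renderer 0x%08x  accelerated=%d  vram=%d MB\n",
               rendererID, accelerated, vramBytes / (1024 * 1024));
    }
    CGLDestroyRendererInfo(info);
}
```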
