Parallel/Distributed OpenGL Rendering

On my forums, http://www.openglforums.com, I’ve started a thread that deals with distributed OpenGL rendering.

I’d like to start a discussion on a more advanced topic: rendering extremely large objects (polygon-wise), objects in the millions or billions of polygons. This is what I am using for my science fair project next year, and I’d like to get a huge jumpstart if I could.

If you’ve read the archived news on the site, you know that I am writing an engine called maelstrom. Now, maelstrom doesn’t aim to be the next Quake II engine or anything like that; it’s not going to be the next FPS engine, nor will it even be optimized for games. I am trying to make the engine as unique as possible. Eventually I’d like to release it commercially, but for now I’ll keep everything open source.

I’d like to start the discussion with the more theoretical aspects and then move into the actual code behind what I am aiming at. Although this might be a somewhat advanced topic, all are invited to join in. And as always, all help is appreciated, as are constructive comments.

Now, maelstrom aims to render extremely high-polygon models in a short amount of time. How do I propose to do this? What I think would be best is if the engine could distribute the rendering over a network, with each client reporting its rendered result back to a master server, which would then display the composed raster image. Thus, if you had a billion-polygon model to render, say a human heart (I’ll get to this later), and you had 100 workstations that could each render 10,000,000 polygons, you would really cut down on your rendering time. Assume all other measures of efficiency have been taken, such as frustum culling. Each client machine stores its rendered part of the image in a buffer and then sends that raster image back to the main server. The main server then composites these 100 pieces into one whole image, which is displayed.
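To make the composition step a little more concrete, here is a rough sketch of what the master server might do once every client has sent back its finished piece of the frame. Nothing here is real maelstrom code yet; the Tile struct, the tightly packed RGBA8 pixel layout, and the compositeTiles name are just assumptions for illustration.

#include <cstdint>
#include <cstring>
#include <vector>

// One piece of the final image as returned by a render node
// (assumed layout: tightly packed RGBA8 rows).
struct Tile {
    int x, y;                  // position of the tile in the final image
    int width, height;         // tile dimensions in pixels
    std::vector<uint8_t> rgba; // width * height * 4 bytes
};

// Master-side composition: copy each node's tile into the full raster image.
void compositeTiles(const std::vector<Tile>& tiles,
                    std::vector<uint8_t>& frame,
                    int frameWidth, int frameHeight)
{
    frame.assign(static_cast<size_t>(frameWidth) * frameHeight * 4, 0);
    for (const Tile& t : tiles) {
        for (int row = 0; row < t.height; ++row) {
            const uint8_t* src = &t.rgba[static_cast<size_t>(row) * t.width * 4];
            uint8_t* dst = &frame[((static_cast<size_t>(t.y) + row) * frameWidth + t.x) * 4];
            std::memcpy(dst, src, static_cast<size_t>(t.width) * 4);
        }
    }
}

This simple stitching only works if every node renders its own separate screen tile; if the model itself is split across nodes instead, the pieces overlap on screen and you need depth information to merge them correctly.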

Since the server does no actual OpenGL rendering, it would not have to be a workhorse machine. In fact, the entire point of the engine would be to make a render farm that doesn’t have to be filled with top-of-the-line machines to render things.

Now, where would this be useful? Well, like I said, if you had to render a heart that was a billion polygons, it would take forever on a single machine (well, some could do it, but they would cost billions of dollars), so a single machine is not feasible. Say a doctor has to perform heart surgery, but before the actual surgery, he can simulate it on a digital heart the computer displays. At the hospital, there could be a mini render farm that would take a scan of the patient’s heart and then render it into a 3D image the doctor could use to simulate the surgery. Thus, any mistake the doctor made could be corrected, and there wouldn’t be any lawsuits afterwards.

Another application would be simulating realistic terrain. Sure, there are terrain engines out there that can render pretty nice, realistic terrain, but say NASA was doing a space mission to Mars and they had to render the actual Martian terrain. NASA could collect data from the Martian surface via satellites and then use an engine like maelstrom to render the actual surface of Mars, enabling the astronauts to train extremely well since they would already know what to expect. This could save millions, if not billions, in lost or broken space equipment and could also save lives.

I really think that as 3D becomes more realistic, we will be forced to move to rendering like this. Using this type of rendering, one can still keep the price of computers down, since relatively high-end machines are not needed, and more realistic scenes can be produced at the speeds we are all used to.

I would love to expand on my ideas. If anyone else has anything they want to add, feel free.

Thanks!

The key issues, it seems, are how you distribute the data, how you composite the image after rendering, and what infrastructure is required to transfer that data around versus how many copies of the data you need across the various rendering nodes.

There are many approaches, but you end up trading copies of data on the nodes for image bandwidth at the back end. Either you have multiple copies of the data, or you must compose the full image with z-buffer information, at least for a classic-style scene. There are other approaches: for example, each graphics node holds a leaf in a BSP tree, and you can then compose without z, but you still need the full image transported; or perhaps you need only transfer the pixels touched by the leaf data if you’re smart about it. That seems like a decent compromise, but it may not be the most efficient implementation for immersive scenes.
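For the z composition case, the per-pixel merge itself is simple; roughly something like this (just a sketch, assuming each node sends a full frame of tightly packed RGBA8 color plus a float depth buffer, with smaller depth meaning nearer):

#include <cstdint>
#include <vector>

// Sort-last style composition of two full-frame layers: for each pixel keep
// the fragment with the smaller depth value. Layer A is merged in place.
void composeWithDepth(std::vector<uint8_t>& rgbaA, std::vector<float>& depthA,
                      const std::vector<uint8_t>& rgbaB, const std::vector<float>& depthB)
{
    const size_t pixels = depthA.size();
    for (size_t i = 0; i < pixels; ++i) {
        if (depthB[i] < depthA[i]) {
            depthA[i] = depthB[i];
            for (int c = 0; c < 4; ++c)
                rgbaA[i * 4 + c] = rgbaB[i * 4 + c];
        }
    }
}

The expensive part isn't this loop, it's moving a full color plus depth frame off every node for every frame, which is exactly the image bandwidth trade-off above.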

There has been work done on this already, some of which I can’t discuss.

Have you heard of WireGL for example?
http://graphics.stanford.edu/software/wiregl/index.html

Do you use software readback and network infrastructure to transport the images for composition or is there some kind of hardware video composition to support your system?
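If it's software readback, each node would grab its color and depth with something roughly like this before shipping them over the network (a sketch only; the function name and the assumption that you read back the whole viewport are mine):

#include <GL/gl.h>
#include <vector>

// Read back the node's rendered color and depth for network transport.
void readbackForComposition(int width, int height,
                            std::vector<unsigned char>& rgba,
                            std::vector<float>& depth)
{
    rgba.resize(static_cast<size_t>(width) * height * 4);
    depth.resize(static_cast<size_t>(width) * height);

    glFinish();                          // make sure rendering has completed
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
}

Readback like this tends to be slow on consumer hardware, which is part of why I ask about hardware composition.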

P.S. There’s more to simulating these things than the graphics. It is unrealistic to think more polys will solve all the challenges.