high performance software renderer

What is the best high-performance software OpenGL implementation? I don’t understand why games such as Quake/Half-Life are able to have software rendering as fast as hardware rendering. Surely, if we turn off texture filtering and other eye candy and lower the resolution, a software-based OpenGL renderer should be just as fast as a hardware one. Plus, in the near future, when we see astronomical gains in CPU/memory speed, graphics rendering might as well move back to the CPU. Just look at sound cards and modems: they used to be separate pieces of hardware, and now they’re mostly handled by the CPU.

BTW, when do you think a reference implementation of OpenGL 2.0 will come out (preferably Mesa)?

It’s not OpenGL (it’s actually DirectX), but check out the SoftWire and swShader work by Nick Capens from the flipcode forums. Pretty impressive stuff.

Ten years ago, when Quake came out, software rendering might have been competitive with hardware rendering. That’s just not the case anymore. A 500 MHz GPU with 16 SIMD pixels per clock and 30 GB/s internal bandwidth is going to smoke even a 4000 MHz CPU with 8 GB/s bandwidth and just a single pixel per clock capability.
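To put those numbers side by side, here’s a back-of-the-envelope fill-rate comparison using the figures from this post. The 20-cycles-per-pixel cost for a CPU rasterizer is my own rough assumption (texture fetch, interpolation, blending all cost cycles), not a measured number:

```python
# Illustrative fill-rate comparison; hardware figures are from the post above.
gpu_pixels_per_sec = 500e6 * 16   # 500 MHz GPU, 16 pixels per clock = 8 billion/s

# The "1 pixel per clock" CPU figure is already optimistic; a real software
# rasterizer spends many cycles per pixel. Assume ~20 cycles/pixel here.
cpu_pixels_per_sec = 4000e6 / 20  # 4000 MHz CPU = 200 million pixels/s

print(gpu_pixels_per_sec / cpu_pixels_per_sec)  # ~40x in the GPU's favor
```

And that’s fill rate alone, before counting the GPU’s ~4x bandwidth advantage.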

If you don’t believe me, try to benchmark a Pentium 4 with Intel Extreme graphics against that same Pentium 4 with a GeForce 6800 or Radeon X800 :slight_smile: And the Intel Extreme even does hardware rasterization – it just uses the CPU for vertex transform.

Not to mention, when you have a hardware implementation, you have the entire CPU for simulation, AI, and other things; you don’t have to share it with rendering.

If you’re hell-bent on getting a software rasterizer, I suggest licensing Pixomatic from RAD game tools. It’s not exactly the OpenGL API, but it’s as fast as you can get software graphics these days.

Right jwatte, but you forgot that hardware is literally harder than software. I don’t think they will bother creating an uber-complex GPU that allows for real-time RenderMan, which in my opinion is the perfect rendering API (it hasn’t changed much since the late 1980s), even though it wasn’t designed for volumetric rendering; it would just be too complicated.

That’s not what NVIDIA would have us believe: http://film.nvidia.com/page/gelato.html

Just look at sound cards and modems.
lol, do you actually believe that sound, networking, modems, and SATA control are done 100% by the CPU? Just because they get integrated onto the motherboard?

Specialised hardware is very effective for a lot of specialized stuff. High perf graphics is still too demanding for our CPUs.

Yes, let’s look at software modems: they suck. They use too much CPU, flood my system bus with interrupts, and often drop connections. I stay the hell away.

How about software sound? Well, the software sound cards only play a single stream, so the games end up doing the mixing. Which leads to lower frame rates. Not to mention that most on-board sound chips have a horrible hiss floor at -50 dB or so, so anyone listening at decent volumes will want a real sound card.
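Just to illustrate the mixing work that lands on the game when the sound hardware only plays one stream: every audio frame, the engine has to sum all its active voices into a single buffer and clamp to the sample range. This is a minimal sketch, not any real audio API; the function name and 16-bit format are my assumptions:

```python
# Minimal software mixer sketch: sum N voices into one output buffer,
# clamping to the signed 16-bit sample range. Illustrative only.
def mix_voices(voices):
    """voices: list of equal-length lists of 16-bit PCM samples."""
    out = []
    for frame in zip(*voices):            # one sample from each voice
        s = sum(frame)                    # naive additive mix
        out.append(max(-32768, min(32767, s)))  # clamp to int16 range
    return out

# Two voices, two samples each; the second sample clips and gets clamped.
print(mix_voices([[1000, -32000], [2000, -32000]]))  # [3000, -32768]
```

Multiply that inner loop by a dozen voices at 44.1 kHz and it’s easy to see where the frame rate goes.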

But, that being said, a sound card plays about 500 samples per frame. Most CPUs can deal with that these days. A graphics card chews through between one and sixteen million samples per frame (depending on resolution) – it will take quite a while longer until general-purpose CPUs catch up. If at all.
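The gap is easy to quantify. Taking 90 fps, CD-quality audio, and a 1024x768 screen with an assumed overdraw factor of 2 (my assumption, to stay inside the post’s one-to-sixteen-million range):

```python
# Per-frame workload: audio samples vs. pixel samples (illustrative numbers).
fps = 90
audio_samples_per_frame = 44_100 / fps       # ~490 -- the "about 500" above

pixels_per_frame = 1024 * 768 * 2            # 1024x768, overdraw factor of 2

print(audio_samples_per_frame)               # ~490
print(pixels_per_frame)                      # 1,572,864
print(pixels_per_frame / audio_samples_per_frame)  # thousands of times more work
```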

I don’t think they will bother creating an uber complex GPU that will allow for real time RenderMan which in my opinion is the perfect rendering API

Have you looked at the GeForce 6800? For all intents and purposes, it can run RenderMan in real time, although you have to translate your renderer to GLSL or HLSL.

What about the multi-core future path that we are going to be forced down soon?

Given that taking full advantage of multiple cores doesn’t fit easily with today’s object-oriented programming, maybe cores could be dedicated to sound or video processing, replacing some of these hardware devices.

One 4 GHz P4 will never beat an X800, but 64 of them?? :smiley:
maybe then we can dump rasterisers and get onto proper real-time raytracing… mmm, proper reflections :cool:

M.

Wow, I never knew the 6800 could be fast enough to run RenderMan shaders translated to GLSL in real time (I’ll try that soon). I always thought the reason we can’t achieve hardware RenderMan is as follows.

With RenderMan shaders, physics is often done in the shaders. I believe current hardware shading languages force you to separate rendering from arbitrary physics: if a shader involves arbitrary physics (e.g. custom ray tracing), non-rendering data has to be accessed, and I believe current hardware shaders don’t let you access system memory for certain operations. That means you would have to upload the physics data to the graphics board, which is impractical.

Again, we could always make the hardware more complicated to resolve that issue, but we can only go so far before the GPU becomes no faster than a general-purpose processor.

http://sjbaker.org/steve/omniv/cpu_versus_gpu.html