when will software renderers be viable?

Originally posted by V-man:
[b]Just curious. How fast can a P4 3.0 GHz (or equivalent AMD, G5, …)
render a single texture, non-lit cube in software mode?

Can it do a minimum of 60 FPS?
[/b]

I was getting about 40 fps on a Pentium MMX with the software OpenGL implementation from SGI. So I guess a P4 would be much, much faster.
Half-Life 2 was playable in software and looked not too ugly on a Celeron 900 at 640x480 some years ago.

Half-Life 2 was playable in software and looked not too ugly on a Celeron 900 at 640x480 some years ago.

Really?

Run your favorite app with Mesa and then with your video card’s drivers. That’ll show you the order-of-magnitude difference.

> Extensions like MMX SSE and SSE2 have
> tried to close the gap a bit but it
> just can’t get you there, there are
> very domain specific pipelined
> hardware optimizations dedicated to
> graphics performance on a GPU.

dorbie,

I can read, I promise. I think you are missing my point which is that there is a limit to how much graphics quality most people are going to want. How much better than photorealistic can you possibly ask for?

In only a few years, I think a general-purpose CPU will provide the best photorealistic images possible on existing displays. At that point I think it will make sense to have an integrated GPU.

i just realised this forum has a “davepermAn” in it…

don’t mix the two of us up, please… thanks

How much better than photorealistic can you possibly ask for?

much better than doom3.

oh, and, all in all:
dedicated hw will always outperform general processors (CPUs) in terms of efficiency (if not raw performance, then energy consumption, heat, etc…)
general processors, on the other hand, will always be the place to develop new stuff on, because they always outperform dedicated hw in terms of scalability of features.

the question is… will general processors one day be fast and efficient enough to drop the need for dedicated hw?..
it always depends on the situation…

if one is fast enough one day, it will then also have to scale its energy consumption down, so it’s useful in laptops as well.

if it gets that far, we’ll want it even lower, for pda’s, cellphones, and just about everything (cellphones merely need voice2text, text2voice and similar, all with high compression, etc…)

dedicated hw will never die. the general processor will never die either.

this topic is not about rasterizing vs. raytracing, or precision, or anything like that, as davepermAn stated.

oh, and, if you know SoftWire (softwire.sf.net), then you know what today’s p4 can do… yes, a simple cube can run at 60fps or more, at high resolutions too. you can have q3 levels running smooth… today’s cpus are rather fast. no gfFX/9800 replacement… but anyway… combined with the scalability in features (which allows much more efficient processing of data, for example direct access to all data from the engine and the renderer at the same time, and other stuff), it’s quite nice.
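
for anyone curious what that per-pixel work actually looks like, here’s a minimal sketch (my own illustration, not SoftWire’s actual code) of the kind of inner loop a software rasterizer runs for one textured span of a triangle; the 256x256 ARGB texture and the 16.16 fixed-point stepping are just assumptions for the example:

```cpp
#include <cstdint>

// Walk one horizontal span of a triangle, stepping texture coordinates in
// 16.16 fixed point and writing point-sampled texels into the framebuffer.
// 'texture' is assumed to be a 256x256 ARGB image.
void draw_textured_span(uint32_t* dest, int count,
                        int32_t u, int32_t v,    // 16.16 fixed-point start
                        int32_t du, int32_t dv,  // 16.16 fixed-point step
                        const uint32_t* texture)
{
    for (int i = 0; i < count; ++i) {
        int tx = (u >> 16) & 255;          // wrap to texture width
        int ty = (v >> 16) & 255;          // wrap to texture height
        dest[i] = texture[ty * 256 + tx];  // plain point-sampled fetch
        u += du;
        v += dv;
    }
}
```

multiply that by every pixel of every triangle every frame and you get a feel for why SSE-style instructions matter so much for software rendering.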

Just curious. How fast can a P4 3.0 GHz (or equivalent AMD, G5, …)
render a single texture, non-lit cube in software mode?

Pretty fast. Download the pixomatic demo and see: http://www.radgametools.com/pixomain.htm

I can read, I promise. I think you are missing my point which is that there is a limit to how much graphics quality most people are going to want. How much better than photorealistic can you possibly ask for?

In only a few years, I think a general-purpose CPU will provide the best photorealistic images possible on existing displays. At that point I think it will make sense to have an integrated GPU.

I think you have a naive idea about the computational power required for “photorealistic” rendering. We’re not two or four years away from it as you seem to think.

For example, the average render time per frame of LOTR was 2 hours (see http://www.wired.com/wired/archive/11.12/play.html?pg=2 ). If you want to do that at 30 FPS, you need to speed things up by a factor of 216,000. If CPU speed doubles every two years, you’ll have to wait about 35 years before those frames come out in “real time”.
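
Spelling out that arithmetic (using the 2-hour figure above and assuming a clean doubling of CPU speed every two years):

```latex
\frac{2\ \text{h/frame}}{1/30\ \text{s/frame}} = 7200 \times 30 = 216{,}000\times\ \text{speedup needed},
\qquad
\log_2(216{,}000) \approx 17.7\ \text{doublings} \times 2\ \text{years} \approx 35\ \text{years}.
```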

You are right to say that, once photorealistic rendering can be achieved on the CPU as well as the GPU, we won’t need a separate GPU. You are way off, though, in how long you think it will take for that to happen.

[This message has been edited by Zeno (edited 02-05-2004).]

> For example, the average render time
> per frame of LOTR was 2 hours

LOTR was probably ray traced, I’m guessing. I don’t think our computers will be able to ray trace in real time with photorealistic quality anytime in the next decade, not while using known ray-tracing algorithms, anyway. That’s not even on the radar, in my opinion.

So when I said photorealistic, I meant high enough res and high quality enough textures that the images look photographic even using standard lighting techniques.

Just to test if I was really far off, I dug out Ye Olde WinQuake and ran a timedemo. Quake 1 was released when 320x240 was considered a “good” resolution and 640x480 was totally unplayable.

On my 650 MHz AMD PC I just ran a 640x480 WinQuake timedemo at 55 fps. On a hot new PC I bet WinQuake could easily break 100 fps. When did Quake 1 come out? Mid-’90s? So it’s been around 8 years, I guess.

All else being equal, a simple projection suggests that in 8 years we will be able to run the current hot new game (Doom 3, almost out now) in a (theoretical) software-only mode.

Of course all else is not equal, but I’m guessing that the 3D industry is much more mature now than it was in the ’90s. I think 3-4 years is realistic for expecting a CPU to do Doom 3-quality graphics, especially if Intel and AMD keep enhancing the SSE-type instruction sets.
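
For reference, if CPU throughput keeps doubling every 18-24 months (an assumption of the projection, not something measured here), eight more years buys roughly:

```latex
2^{8/2} = 16\times
\quad\text{to}\quad
2^{8/1.5} \approx 40\times
\ \text{the raw speed of today's CPUs.}
```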

[This message has been edited by gltester (edited 02-05-2004).]

LOTR was probably ray traced, I’m guessing.

True raytracing is almost never used in production. Too slow and not useful enough, apart from doing realistic glass or water. Pixar’s RenderMan is used very often, and it is not a raytracer at its core.

I wonder what resolution they render to movie screens (film) at? I don’t know, but it’s probably massively higher than a computer screen.

Graphics cards seem to have been advancing much faster than computer screens in both image quality and frame rate, and if so, GPUs will catch up with (and be limited by) screen quality sometime soon.

People say “oh, my L33T 3D card runs Quake 3 at 500 fps”. Maybe, but their whole computer doesn’t. Maybe it responds to their inputs 500 times per second, but the monitor or LCD has a fixed refresh rate much lower than that…
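
As a concrete illustration of that cap, here is a rough sketch (assuming a Windows/WGL context and that the driver exposes the WGL_EXT_swap_control extension) of locking buffer swaps to the monitor’s refresh; once this is on, the displayed frame rate can never exceed the refresh rate, no matter what the benchmark counter claims:

```cpp
#include <windows.h>

// Function pointer type for wglSwapIntervalEXT from WGL_EXT_swap_control.
typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

void enable_vsync()
{
    // Must be called with a current OpenGL rendering context.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (wglSwapIntervalEXT)
        wglSwapIntervalEXT(1);  // 1 = wait for one vertical retrace per swap
}
```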

[This message has been edited by gltester (edited 02-05-2004).]

At the lowest level, rendering will always be a massively parallel problem. (Just think how many photons are used to “render” a scene in real life.) For that reason, it will always be massively inefficient to have a single CPU do the job. Perhaps a super-ultra-hyper-threaded processor could, but then you have a GPU.
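
A toy sketch of why per-pixel work parallelizes so naturally (modern C++ threads used purely for illustration; the checkerboard shade_pixel is a stand-in for real shading, not anyone’s actual renderer):

```cpp
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

// Stand-in for real per-pixel shading (texture fetch, lighting, ...):
// each pixel can be computed without looking at any other pixel.
uint32_t shade_pixel(int x, int y) {
    return ((x / 16 + y / 16) & 1) ? 0xFFFFFFFFu : 0xFF202020u; // checkerboard
}

// Each worker fills its own band of scanlines; no synchronization is needed
// because no two workers ever touch the same pixel.
void shade_band(uint32_t* fb, int width, int y0, int y1) {
    for (int y = y0; y < y1; ++y)
        for (int x = 0; x < width; ++x)
            fb[y * width + x] = shade_pixel(x, y);
}

void shade_frame(uint32_t* fb, int width, int height) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned i = 0; i < n; ++i)
        workers.emplace_back(shade_band, fb, width,
                             int(height * i / n), int(height * (i + 1) / n));
    for (auto& t : workers) t.join();
}
```

A GPU is essentially that idea baked into silicon, with many such pipelines plus fixed-function hardware wrapped around them.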

Also, whereas floating point arithmetic is a solved problem, rendering may never be. You can always add more complexity to a scene and always better approximate the way light behaves, but it will always be an approximation. For that reason I don’t see the FPU/GPU analogy holding water.

Originally posted by endash:
You can always add more complexity to a scene and always better approximate the way light behaves, but it will always be an approximation.

That I do agree with completely. Even if the GPU is effectively limited by the resolution and speed of the display device, you can still keep adding geometry, as much as you want.

Of course, there is a limit on the geometry too. Not in how much you can add, but in how much people will notice and/or care about.

When you get to the point where you are rendering every last little nail and tack in a building or screw in a vehicle, when every bone and ligament in the human body is being modeled somewhat accurately, does anybody care if you can add more detail? Will increasing the geometry help you sell more games? Nope.

Not while we are still using sub-4-megapixel displays, not one bit.

Originally posted by gltester:
I wonder what resolution they render to movie screens (film) at? I don’t know, but it’s probably massively higher than a computer screen.

Movie resolution is usually (depending on your budget) 2K, 4K, 8K or 16K. That is, 2, 4, 8 or 16 thousand pixels wide, with the number of vertical pixels depending on the required aspect ratio.

Originally posted by crystall:
Movie resolution is usually (depending on your budget) 2K, 4K, 8K or 16K. That is, 2, 4, 8 or 16 thousand pixels wide, with the number of vertical pixels depending on the required aspect ratio.

So LOTR was probably 144 megapixels by my estimation (16:9 aspect ratio with a high-budget 16k pixels wide). A 1600x1200 computer screen is less than 2 megapixels. A very simplistic estimate would be that the LOTR renderer could have generated one frame at computer screen quality in around a minute and a half.
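
The back-of-the-envelope numbers behind that estimate, taking the 2-hour frame time and the 16K width quoted above at face value:

```latex
16000 \times 9000 = 144\ \text{MP},
\qquad
\frac{144\ \text{MP}}{1600 \times 1200 \approx 1.92\ \text{MP}} \approx 75,
\qquad
\frac{7200\ \text{s}}{75} = 96\ \text{s} \approx 1.5\ \text{min}.
```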

Still way too slow for animation, and they probably had a whole render farm doing the processing instead of a single PC, but I hope somebody can see my point, which is that we are not all that far away from a practical quality limit. In a few years, increasing your game quality will require:

  1. actual game design, not just hot graphics
  2. waiting for hardware advancements other than the 3D card (monitor, CPU)

In many/most cases the CPU should be able to generate as much quality as the screen can reasonably represent. Even if not, we are talking about adding a cheap $50 3D card, not a $300 monster, in order to max out the display’s abilities.

my opinion anyway

Possible improvements: increased resolution, antialiasing, lighting, global illumination, high-fidelity BRDFs, caustics, atmospherics & volumetric illumination, weather, motion blur, HDR & adaptation, depth of field, hair & cloth with full collision, global physics and materials, fluid dynamics, all of the above interacting… the list could go on ad nauseam. All of these are problems that may be solved on the graphics cards of the future, but always with compromises, because the computational requirements are effectively limitless.

Do not assume that LOTR is the pinnacle of graphics achievement; it won’t be, despite how impressive we all find it. Movies take all sorts of shortcuts that true 3D environments cannot; in addition, they are often hand-crafted in many ways, rendered in pieces, and composited.

More importantly, LOTR was a movie; it was not ray traced or rendered, it was predominantly FILMED. It could not have been made entirely with CGI even with today’s best technology. Looking at a movie and citing it as an example of where ‘real-time’ graphics can go shows that even current offline rendering technology doesn’t meet your standard for ‘viability’. It reinforces the belief that it probably never will.

I’m not even going to comment on the jaded-gameplay criticism, except to say you don’t get a lot of people playing Space Invaders these days. There’s a good reason for that.

It just boggles the mind that someone would say we will be waiting on advances in the CPU rather than the graphics card, and that the CPU will outstrip GFX performance. Ignoring the contradiction, we already wait for CPU, memory, and bus advances. See all the comments above as to why this is unlikely to happen any time soon, none of which you’ve directly addressed.

[This message has been edited by dorbie (edited 02-05-2004).]

This has dragged out longer than I intended with my original comment, but as an example of where I’m coming from, I’ll pick on your example of caustics.

I’ve seen pretty cool water effects using nothing but multitexturing. Sure, there may be a million ways to make water look better, but we are no longer talking about advances as significant as, say, transparent water was in the original GLQuake.

What I’m saying is all the low-hanging graphics fruit has already been picked. The jump from Doom 3 to Doom 5 (or whatever) will be a tiny hop compared to the jump from Doom to Quake 3. It is a mathematical certainty that people are going to notice less and less every time we come up with yet another effect, and not just because they are jaded, but because we are approaching a limit called “looking a lot like reality”.

When Space Invaders was new, people were like “WOW, a computer that draws pictures!!!”

Now we are at “oh, this computer’s pictures have better light reflections in the pools, and look, the people’s joints move realistically now”. Not quite as exciting, and not as important to sales, IMO.

[This message has been edited by gltester (edited 02-05-2004).]

Originally posted by al_bob:
[b][quote]Half-Life 2 was playable in software and looked not too ugly on a Celeron 900 at 640x480 some years ago.[/quote]

Really?[/b]

Yeah, I got my hands on some secret alpha code

Damn typos… I guess I’ve been waiting just too long for this game now…

I thought maybe a concrete example would help, since I don’t seem to have explained myself in a way that you can at all respect:
http://graphics.ucsd.edu/~henrik/images/metalring.jpg

This is basically a small ray-traced picture with some caustics. You and I look at this copper ring with its reflected light pattern and think “coooooool”, but most of the public (the mass market) probably wouldn’t even notice if the light pattern was missing. Even without using any ray-tracing, Doom 3 looks almost as good as this picture does.

All I’m saying is we are rapidly approaching a point where GPUs will be able to easily do all of the effects that most people notice, and that CPUs won’t be far behind afterwards. Somewhere in there we may move the GPU onto the same die as the CPU. Or possibly we’ll just drop the GPU altogether and do CPU-only rendering. Most people will be happy with that. The rest of us will shell out for hot graphics cards.

I’ve known plenty of people who are totally unable to tell the difference between a game running at 25 fps and one running at 100 fps. One programmer I used to work with had his damn monitor hooked up through a damaged cable that caused visual echoes, and configured at a 60 Hz refresh rate, for like a year without noticing it, until finally I made him switch cables.

People are not as perceptive as we are here, and the state of the art is very close to the limit of what most people will spend $300 to “fix”. In a couple of years, standing 10 feet away from the monitor, you won’t be able to tell Doom from a photograph.

Yes, but my projective texture caustics weren’t real caustics. The technique doesn’t model true refraction, and that is a class of problem that, if you wanted to go beyond eye candy and do it correctly, would be more expensive. Consider caustics from a dynamic surface simulation with a real interacting object and an accurate treatment of depth. It’s a volumetric problem. With all of these things there are degrees of fidelity, and you can always crank it up a notch to get things looking or behaving more accurately. For some stuff it doesn’t matter; for other features and scenarios it’s vitally important.
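
For anyone who hasn’t seen the cheap trick being contrasted here, this is a rough sketch (my illustration, not dorbie’s actual code) of classic fixed-function “fake caustics”: a scrolling caustic texture projected onto underwater geometry on a second texture unit, with no refraction, no surface simulation, and no treatment of depth. It assumes a GL 1.3-era context with multitexture available, and the scale/scroll constants are made up:

```cpp
#include <GL/gl.h>
// On Windows, GL_TEXTURE1 / glActiveTexture would come from glext.h and
// extension loading; omitted here to keep the sketch short.

void setup_fake_caustics(GLuint causticTex, float time)
{
    glActiveTexture(GL_TEXTURE1);            // unit 1: the caustic pattern
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, causticTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD); // brighten base

    // Generate s/t from the geometry's x/z position so the pattern is
    // "projected" straight down onto whatever is drawn.
    static const GLfloat splane[4] = { 0.05f, 0.0f, 0.0f, 0.0f };
    static const GLfloat tplane[4] = { 0.0f, 0.0f, 0.05f, 0.0f };
    glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_OBJECT_LINEAR);
    glTexGenfv(GL_S, GL_OBJECT_PLANE, splane);
    glTexGenfv(GL_T, GL_OBJECT_PLANE, tplane);
    glEnable(GL_TEXTURE_GEN_S);
    glEnable(GL_TEXTURE_GEN_T);

    // Scroll the projection over time to fake the motion of the water.
    glMatrixMode(GL_TEXTURE);
    glLoadIdentity();
    glTranslatef(time * 0.1f, time * 0.07f, 0.0f);
    glMatrixMode(GL_MODELVIEW);

    glActiveTexture(GL_TEXTURE0);            // back to the base texture unit
}
```

It looks convincing in motion, which is exactly the point being made: the eye candy is cheap, while doing it physically correctly is a different order of problem.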

Originally posted by dorbie:
… For some stuff it doesn’t matter; for other features and scenarios it’s vitally important.

Yes, that’s all true.

And maybe there are some games where caustics really would matter. Not your typical FPS or flight sim, though. I could see how a game like Myst could look AWESOME with some good caustics, and it might even be a critical part of the game.

RTS games might reasonably need to show 100,000 troops moving in different formations on screen (much like LOTR). I can see that. But even a couple of years from now, computer screens will still only be, say, 3200x2400 if we are lucky.

That’s only 77 pixels per troop, with no room left over for ground or anything. I don’t care if you’ve got the frigging CIA’s secret supercomputer in your GPU, you are still limited in visual quality by the monitor at that point, and there’s nothing you can do about it. No amount of geometry jammed into the GPU will make up for the shortcomings of the monitor.
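
The arithmetic behind that figure, using the hypothetical 3200x2400 screen above:

```latex
\frac{3200 \times 2400}{100{,}000} = \frac{7{,}680{,}000}{100{,}000} \approx 77\ \text{pixels per troop}.
```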

AND once we hit that point, GPUs won’t need to get much better, and CPUs will start catching up in abilities.

[This message has been edited by gltester (edited 02-05-2004).]