Moving back to software - when?

Hi everyone,

Sorry for being slightly off-topic…
I'd like to hear your opinions on when (and why), if ever, we will move from hardware acceleration back to software renderers for things like games and other advanced visualization systems. For the last several years the speed ratio between software and hardware renderers has been more or less stable, so it's really a question of how you expect the future to look and whether you have interesting pros or cons for either option.
Since the CPUs are getting about 70% faster every year (I don't know exactly how GPUs behave, but I think it's even more), someone has 'proven' that in 2020 we will reach the highest possible CPU speed because of physical construction constraints.
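Just to put that figure in perspective, here is a tiny sketch of how a hypothetical 70% yearly gain would compound (the 70% is only my rough assumption, not a measured trend):

#include <cmath>
#include <cstdio>

int main() {
    // Hypothetical compound growth: if CPUs really got ~70% faster every
    // year (the figure assumed above, not a measured trend), the cumulative
    // speedup after n years would be 1.7^n.
    const double annual = 1.7;
    for (int years = 1; years <= 15; ++years)
        std::printf("after %2d years: ~%.0fx\n", years, std::pow(annual, years));
    return 0;
}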

Thinking 'logically', one would say that since GPUs have been consistently faster than CPUs in recent years, they should stay faster in all the years to come. But look at Java vs. C++. The latter is generally several times faster than the former, yet as time passes we're soon expecting successful Java games too, aren't we? (Or maybe there already are some.) The point is that not many would have expected that back in the old days of software renderers. There are simply always multiple points of view, which I'm trying to consider too.

Thanks for any input.

If you put it that way, your question becomes absurd. Where is the border between hardware and software rendering? Of course, if we had a 100-core CPU with low-latency memory access and high-performance math, we could throw our GPUs away. But don't you think that's essentially what we have now? IMHO, graphics demands rise much faster than CPU technology, so I guess there will always be highly parallelised units to process graphics.

Exactly: you're thinking of what's possible in gfx today and saying that in the future we'll be able to do it in software at the same speed, while neglecting to realize that in the future we won't be doing the same things that we do today. There is also the issue of third parties saving us time that you have to think about: most don't want to give up the black box and would rather do more important things with their time. Also, the last time I checked we've been hovering around 2 GHz CPUs for a while now, hence the whole move to multicore CPUs, which becomes reality this summer. Even if you had infinite CPU power you would use it up with infinite bloat :slight_smile:

Look at the large-capacity hard disks we have today. Back in '98 I think I saw a 1 GB HD on the shelf and thought that if we had 400 GB we wouldn't know what to fill it with. Well, software size has increased to where a game, once installed, takes over 1 GB of space. Unthinkable back in '98. That's how it goes.

Not in the foreseeable future.

Graphics hardware will always be faster than software on general purpose CPUs.

Graphics is a very specialized task, with performance gains coming from application-specific architectural features like on-chip coarse Z and memory management. Graphics is also inherently scalable through parallelism. It's also nice to be able to perform 3D graphics in parallel with your application/game code instead of wasting a CPU to do it. In fact, good software on the CPU working in unison with good graphics hardware can further enhance performance.

Originally posted by JD:
Exactly: you're thinking of what's possible in gfx today and saying that in the future we'll be able to do it in software at the same speed, while neglecting to realize that in the future we won't be doing the same things that we do today.
Umm, I think I was misunderstood. Blame my English, but I'm far from thinking the way you suggest I do.

What I want is to hear what you think the direction is that we're currently moving in with 3D graphics. I also wanted to point out that it's very difficult to predict the future, at least in computing. Even so, maybe some of you have good guesses about what will change.

The point about parallelism is definitely a good one. But it's noticeable that there's usually some stage of evolution at which computing sacrifices further speed increases for convenience and simplicity of use. Real-time computer graphics is extremely critical about performance, so this might not apply here, but who knows.

Well, if you want flights of fancy, imagine a system where the GPU is a coprocessor with close affinity to the CPU(s). Both share extremely fast memory systems and caches, and neither is bogged down by the legacy x86 design.

The CPU can almost instantaneously access the GPU registers and cache and vice versa, and some software code just compiles to vector-like instructions, some of which run on the GPU effortlessly.

GPUs are heading more towards a general-purpose instruction architecture, including parallel MIMD execution with memory access, to make the rapid processing of memory-coherent data efficient.

When will CPUs be replaced by the GPU?
Or when will they merge into a single system-on-chip design?

Some of this is more feasible than you think, but parts of it seem vanishingly unlikely unless you’re talking about a complete design departure like a console design.

I was thinking about something similar…

someone has 'proven' that in 2020 we will reach the highest possible CPU speed
Flat Earth Society member.

ultimately of course there will be just the one processor (technically prolly a million subprocessors running at 10ghz each, or something), though the question is when - 2050? 2100?
cpus in partnership with gpus are gonna be around for a while yet

If we have a lot of cores that can work simultaneously, can be programmed to do a lot of floating-point operations in a single clock cycle, and have a lot of hardware registers to avoid frequent memory accesses, plus a large memory bandwidth, maybe we can move to software rendering.

No, there's more going on in a GPU than just a lot of FP parallelism and registers.

There are fundamental reasons an ASIC does a task better than a CPU. A GPU is an ASIC; it has optimizations that need hardware to be efficient, like many pipelined heterogeneous operations sequenced per clock, coarse Z, pipelining perfectly timed with respect to the memory fetches, on-chip FIFOs between stages, domain-specific cache architectures, and memory addressing specific to the types of memory access. It doesn't matter what you claim your CPU can do. GPUs will evolve too; you wind up talking about firmware running on CPUs that have evolved to look like a GPU, and that makes no sense (it's less area-efficient) for the 'immediate' future. If flexibility and configurability were your main objective then maybe you'd have something, but it's a simple case of horses for courses and being economical.

I'm pretty scared by the fact that CPUs could emulate GPUs. Well, there could be advantages: every portable PC would have "efficient" (in the sense of "working") glslang and so on, but I hardly believe it will be able to outperform GPUs.
I heard Intel already does this on integrated graphics (only vertex processing, however).

I actually think GPUs have grown so fast because of their "stream" programming model, which allows them to scale with parallelism and to make different assumptions than CPUs do. By contrast, CPUs are designed to handle "flexible code" that jumps here and there in memory.
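Just to illustrate the contrast I mean with a toy sketch (the functions and types below are invented purely for illustration):

#include <cstddef>
#include <vector>

// "Stream" style work: every element is processed independently with a
// fixed, predictable memory access pattern, which is easy to parallelise
// and pipeline and is what GPUs are built around.
// (Assumes out has the same size as in.)
void stream_kernel(const std::vector<float>& in, std::vector<float>& out) {
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = in[i] * 2.0f + 1.0f;   // no dependency on other elements, no branches
}

// "Flexible" CPU-style work: data-dependent branching and pointer chasing,
// where the next memory access depends on the previous one.
struct Node { float value; Node* next; };

float chase(const Node* n) {
    float sum = 0.0f;
    while (n) {                          // unpredictable branches, scattered loads
        if (n->value > 0.0f) sum += n->value;
        n = n->next;
    }
    return sum;
}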

I perfectly agree with what I read on the net some days ago: GPUs and CPUs will likely converge on a single feature set in the (surely-not-near) future, but there will still be algorithms which run best on one kind of processor or the other.

Certainly, integrated graphics vendors have every interest in merging the two chips into one, but I hardly believe CPUs will kill GPUs or vice versa.

Originally posted by Obli:
I'm pretty scared by the fact that CPUs could emulate GPUs.

What about the fact that GPUs could emulate CPUs? I started programming back when the GPU was little more than a gateway to the framebuffer, and all this functionality has since been taken over from the CPU. Why did they evolve like this? Because GPUs are optimized in ways that make them extremely fast for graphics programming, but extremely slow for everything else.

I perfectly agree with what I read on the net some days ago: GPUs and CPUs will likely converge on a single feature set in the (surely-not-near) future, but there will still be algorithms which run best on one kind of processor or the other.

This would be a step in the wrong direction. The very design decisions that make GPUs fast would make CPUs slow.

What I believe we'll see in the future is more dedicated cores. We've already seen dedicated sound processors, and now we're seeing the introduction of physics processors. I believe we'll see all these, and more, located on a single motherboard and connected via some flexible communication system (perhaps a HyperTransport or PCIe derivative): several specialized cores and a few general-purpose cores. Hints of this can already be seen in next-generation consoles and CPU roadmaps.

My view is that we will indeed move towards software, again.

Now, now, hold your horses. I didn’t say software run on the CPU. :slight_smile:

I hope the evolution we have seen with VPs and FPs is just the tip of the iceberg of the generalized programmability GPUs will provide in the future.

As memory requirements and consumption increase with more detail in … everything, I think we'll start to see much more "stuff" generated on the GPU, e.g. terrains and textures, from functions supplied by the programmer to the GPU. GPU speeds have increased far more than memory size on cards, tipping the ever-present computation-vs-storage tradeoff more towards (GPU-)generated content and lowering the relative storage requirements.
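As a toy example of what I mean by generated from functions - a trivial pattern computed from a formula instead of stored as a bitmap. In practice such a function would of course be a shader running on the GPU; everything below is made up just for the sketch:

#include <cstddef>
#include <cstdint>
#include <vector>

// Generated, not stored: a checker pattern produced from a formula rather
// than a bitmap kept in memory. A few lines of code standing in for w*h
// bytes of texture data.
std::vector<std::uint8_t> makeChecker(int w, int h, int cell) {
    std::vector<std::uint8_t> pixels(static_cast<std::size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            bool on = ((x / cell) + (y / cell)) & 1;
            pixels[static_cast<std::size_t>(y) * w + x] = on ? 255 : 0;
        }
    return pixels;
}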

What I hope we will see more of in the future is smarter partitioning of memory between GPU and CPU, so that graphics cards would only require a baseline amount of memory on-card (in addition to caches, like any decent CPU has), allowing n bytes (say half a gig, if you want a number) to be decoupled by the chipset from the CPU RAM bus and temporarily given to the GPU to play with. Perhaps RAM will fall so far behind the processing speeds of both CPU and GPU (as if it weren't seriously behind already :slight_smile: ) that dedicated packet-switched networks to RAM will be required, making such complex partitioning logic unneeded.

The potential benefits of such a setup are too numerous to list. Suffice it to say all parts of a system, not to mention users of such a system, would benefit.

But whether we’ll start to see this change in memory architecture in our active lifetime or not, I’m sure we’ll see way more software solutions allowing way more flexibility for visualization than we have today - and more and more of it will be run on the GPU.

I hope future GPUs will become more like complete visualization systems, where the CPU can e.g. upload a scene/frame description and let the GPU handle translucent-surface sorting, optimal sorting of state changes, turning states on and off, FPs, VPs and all the other tedious, boring but unfortunately currently required micro-management we are forced to do on the CPU.
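To give one concrete example of the micro-management I mean - the back-to-front sorting of translucent geometry that applications currently do themselves before submitting anything (the types below are invented just for the sketch):

#include <algorithm>
#include <vector>

struct Drawable {
    float distanceToCamera;   // assumed to be precomputed by the application
    unsigned id;              // hypothetical handle to whatever gets drawn
};

// The kind of per-frame bookkeeping referred to above: before submitting
// translucent geometry, the application sorts it back-to-front itself so
// that blending composites correctly.
void sortTranslucent(std::vector<Drawable>& translucent) {
    std::sort(translucent.begin(), translucent.end(),
              [](const Drawable& a, const Drawable& b) {
                  return a.distanceToCamera > b.distanceToCamera;  // farthest first
              });
}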

The only obstacle I currently see in this area is that I'm not aware of any open project (research or otherwise) seriously looking into extending OpenGL in these directions, be it under the OpenGL name or something else - at least not with an eye towards what servers (what most people today think of as GPUs) might be able to do, and therefore towards off-loading the client (host CPU).

tamlin

would you be so kind as to define what you mean by "software" and "hardware"? For me, software means general processing and hardware means hard-wired functionality. Software is slow but flexible; hardware is fast but, well, hard-wired :slight_smile:

A hardware unit will ALWAYS be faster at simple tasks than a software unit.

Digital processing evolves to gain more performance and more flexibility. Flexibility: the ability to control execution. Performance: implementing time-critical tasks in hardware. If we combine the two, we get a new solution which can't be considered either "software" or "hardware" based, if we use the established terminology. So for me, such discussions don't make any sense. We can make our prognoses about how future GPUs may look, but please don't use the current terminology as it is! I'm sure it won't be compatible with future solutions.

if you want my two cents, i can tell you exactly where hardware will likely go.

it's very simple really.

take a curve. there are an infinite number of points on the curve. in a few years to a decade, polygons will be an ancient paradigm. a static polygon model is basically preprocessed points on the curve at a preset resolution. for a mildly realistic scene which is more or less freely explorable, precomputing all the per-vertex attributes for the scene is pure insanity, much less managing the multi-resolution meshes which would be required for any reasonable degree of realistic resolution.

therefore as a pure matter of necessity, like it or not, future hardware will accept uploaded parametric geometry - nurbs or subdivision surfaces or whatever. from there you pretty much won't be able to touch the surfaces directly, because the hardware will have to manage the tessellation of the surface with respect to an evolving view frustum.

because parametric spaces are not uniform, it is likely that you would define a 2D mesh (yes, a 2D mesh in the parametric U and V coordinate system) to serve as the base geometry which the hardware will subdivide. (other subdivision processes could be used, but this is the most economical model.)
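to make that a bit more concrete, here's a rough cpu-side sketch of evaluating such a patch over a grid of (u, v) samples - assuming a bicubic bezier patch purely for illustration, since the surface type is left open above:

#include <array>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

// evaluate a cubic bezier curve with de casteljau's algorithm
static Vec3 bezier(const std::array<Vec3, 4>& p, float t) {
    Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// evaluate a bicubic bezier patch at parametric coordinates (u, v):
// reduce each row of the 4x4 control net along u, then the result along v
static Vec3 patchPoint(const std::array<std::array<Vec3, 4>, 4>& ctrl, float u, float v) {
    std::array<Vec3, 4> column;
    for (int row = 0; row < 4; ++row)
        column[row] = bezier(ctrl[row], u);
    return bezier(column, v);
}

int main() {
    // a flat 4x4 control net, just so the program runs; real data would
    // come from the application or a modelling tool
    std::array<std::array<Vec3, 4>, 4> ctrl;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            ctrl[i][j] = { float(j), float(i), 0.0f };

    // sample a regular grid in (u, v) -- the "2D mesh in parametric space"
    // that the hardware tessellator would refine against the view
    const int res = 4;
    for (int i = 0; i <= res; ++i)
        for (int j = 0; j <= res; ++j) {
            Vec3 p = patchPoint(ctrl, float(j) / res, float(i) / res);
            std::printf("(%.2f, %.2f, %.2f)\n", p.x, p.y, p.z);
        }
    return 0;
}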

i'm working on all of this, and for the last week i've been working on a low-level API which should be immediately comfortable for opengl users. i've built the system and taken as much advantage of current hardware as possible, and quite effectively as it turns out. the system itself though is completely pathological and would be better implemented entirely in hardware.

beyond this, for limited applications (especially hard science) i would predict a finite element physics processor tightly coupled with the graphics processor. i predict the definition of a 'video game' will converge on ultra-realistic virtual reality, and processors will be tailored specifically to this need. games which do not conform to the constraints of something similar to 'real' physics would have to take a backseat performance-wise and fall back heavily on software. but people with a taste for 'unrealistic' games probably wouldn't care about this. the heaviest puller for this technology will be garment simulation.

though i prefer not to talk about it, i've also done extensive work in the field of real-time anatomical simulation (down to the last pieces of bone and sinew) and i can tell you that a whole slew of specialized parallel asynchronous pipelined hardware will develop around heuristic models of anatomical simulation. of course other sorts of simulation will get a free ride, but anatomical simulation will drive everything, just as video games drive computer science.

EDIT: just to try to reconcile the last two paragraphs - trying to do anatomical simulation on a finite element processor is just insane power-wise for vr-type applications where people would expect to play a major role and in vast quantities. the first games to see this technology will be one-on-one fighters with a lot of skin ('fist of the north star' would be an awesome candidate). as far as i know i'm the only person who has made real headway into this field, ie. musculographics.com (not me)… anyone with money and an upstanding ethical history who wants to talk about it can just PM me.

just tryin to blow your mind!

sincerely,

michael

In addition to all that's been said, I think we're going to see more and more rendering algorithms that are now considered simple but slow and computationally expensive being implemented in hardware (or programmed hardware) and run in real time, for example ray tracing + photon mapping.
Remember the Z-buffer: it was invented in the mid-1970s as a simple visibility-determination algorithm, practically useless at the time because of its computational needs, and now everyone has it in hardware…
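For reference, the whole idea fits in a few lines; a minimal sketch of the principle, not any particular hardware's implementation:

#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// Keep the nearest depth seen so far per pixel and only write a fragment
// if it is closer. Conceptually simple, but it costs a full-screen buffer
// plus a compare per fragment: cheap in today's hardware, prohibitive when
// it was first proposed.
struct Framebuffer {
    int width, height;
    std::vector<std::uint32_t> color;
    std::vector<float> depth;

    Framebuffer(int w, int h)
        : width(w), height(h),
          color(static_cast<std::size_t>(w) * h, 0),
          depth(static_cast<std::size_t>(w) * h,
                std::numeric_limits<float>::infinity()) {}

    void writeFragment(int x, int y, float z, std::uint32_t rgba) {
        std::size_t i = static_cast<std::size_t>(y) * width + x;
        if (z < depth[i]) {      // depth test: keep only the nearest surface
            depth[i] = z;
            color[i] = rgba;
        }
    }
};

int main() {
    Framebuffer fb(4, 4);
    fb.writeFragment(1, 1, 0.8f, 0xff0000ffu);  // far fragment drawn first
    fb.writeFragment(1, 1, 0.3f, 0x00ff00ffu);  // nearer fragment overwrites it
    fb.writeFragment(1, 1, 0.9f, 0x0000ffffu);  // farther fragment is rejected
    return 0;
}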

honestly i imagine that in the not-too-far-flung future there probably won't really be any such thing as the graphics programmer. everything will be done on dedicated hardware, and the basic workflow will pretty much become standardized.

the graphics programmer would probably have little more to do than pick from a catalog of programmable shaders and configure the hardware.

beyond that everything will probably just be event management, and i wouldn’t categorize that as graphics programming.

such a future might appear void of creativity. but the truth of the matter is everything will converge on optimal solutions, and the solutions will be built into dedicated hardware.

the trick will probably only be a matter of society not forgetting how the hardware even works once evolution becomes only a matter of quantity rather than quality.

constrained AI / natural language processing and database management will probably take over the curiosity of hobbyists. graphics and physical simulation will likely appear quite quaint and formalized in comparison before too long.

So to sum up what was said, the future of OpenGL graphics is basically 100% hardware Renderman (implicit surfaces done with micropolygons + complex shaders + some ray tracing capabilities) mixed with physics.

Ok, that is pretty obvious.

Originally posted by ZbuffeR:
So to sum up what was said, the future of OpenGL graphics is basically 100% hardware Renderman (implicit surfaces done with micropolygons + complex shaders + some ray tracing capabilities) mixed with physics.

Ok, that is pretty obvious.
if i post too much dorbie will jump my tail, but that is basically correct. the only thing i would dispute is micropolygons.

i would predict that fpo's relief mapping approach will become standardized. polygons will be considered hulls, and will span about 3 to 8 pixels. the pixels inside the polygon hulls will be displaced to achieve perfect curvature and detail mapping.
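roughly, the per-pixel search relief mapping does, sketched on the cpu just to show the idea (the real thing is a fragment shader, and all names and numbers below are made up):

#include <algorithm>
#include <cstddef>
#include <vector>

// a depth field stored alongside the polygon "hull"; values in [0, 1]
// measure how far the real surface lies below the flat polygon
struct DepthField {
    int w, h;
    std::vector<float> depth;
    float sample(float u, float v) const {
        int x = std::min(w - 1, std::max(0, static_cast<int>(u * (w - 1))));
        int y = std::min(h - 1, std::max(0, static_cast<int>(v * (h - 1))));
        return depth[static_cast<std::size_t>(y) * w + x];
    }
};

// march the view ray through texture space until it passes below the stored
// surface; (rayU, rayV, rayZ) is the ray direction expressed in texture space
void reliefHit(const DepthField& df, float u, float v,
               float rayU, float rayV, float rayZ,
               float& hitU, float& hitV) {
    const int steps = 32;                // linear search; finer means fewer artifacts
    float rayDepth = 0.0f;
    for (int i = 0; i < steps && rayDepth < df.sample(u, v); ++i) {
        u += rayU / steps;               // step across the surface...
        v += rayV / steps;
        rayDepth += rayZ / steps;        // ...and down into the relief
    }
    // a real implementation refines the hit with a short binary search
    // between the last two samples before shading the displaced point
    hitU = u;
    hitV = v;
}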

it doesn't make practical sense for polygons to be on average one pixel wide as long as you're rasterizing triangles and not raytracing. doing so pretty much defeats the purpose of using polygons in the first place, or at least makes it look pretty ironic and foolish.

the future of global raytracing for real-time apps is a bit more complicated though. i personally can't offer up any predictions. much more realistic than classical hardware raytracing is hardware photon mapping. hardware likes straightforward procedures.

I predict that soon there will be apocalyptic chemical/bio/nano/antimatter/nuclear war, or alien invasion, or asteroid/comet impact, or global warming caused weather disasters, or worldwide nazi/communist/fundamentalist revolution, or second Christ/Satan/Cthulhu coming.

Therefore, we should not be worried about having to change our profession 10 years before retirement because of hardware evolution making our graphics programming m4D $k|11z hopelessly obsolete.