Moving back to software - when?

Considering a game featuring Cthulhu was basically what got so-called “consumer-level” hardware OpenGL support going (even if initially in the form of “miniport drivers”), and it seems we’ve now reached a period of more-or-less maturity/stagnation when it comes to hardware progress, would the return of Cthulhu complete the circle, or would the minions simply chant “death to polygons”? :slight_smile:

sorry if i suggested graphics programmers will be out of work in 10 years. but ever since the dawn of hardware graphics the domain of the graphics programmer has been gradually consumed by the hardware. there is no reason to believe this will not continue until there is nothing left. (especially for vr which mimics reality to the last detail and has no explicit rules such as ‘life meters’)

it’s just a matter of how exponential the assimilation process will be at its limits.

there is no reason to believe though that opengl will not continue to be backwards compatible at least to its dying days, heaven forbid, for anyone looking for nostalgia.

i would say that graphics programmers’ skills will be marketable for at least the next 30 years. mind you i say this with a sort of vain dissociation myself, as i am no more exempt from the rule than anyone else, and graphics programming has consumed and will continue to consume a great deal of my hours, and in the end at best all i will probably have to say for it is that, if i’m lucky, ‘i helped lay the foundation of hardware graphics’.

graphics programmers will still likely always have a place in building modeling environments. that will probably be their only saving grace and application for their knowledge.

schools will teach the history of graphics rather than applicable technique, and the number of people who really understand it at the lowest level will dwindle, probably all the way down to a select few, a lot like the number of people who actually really know assembly language. if not for SSE and programmable gpus, assembly would probably be an ancient, unregarded art form save for compiler programmers.

I disagree. I’ve been programming graphics for at least the last 15 years. There is more varied and more interesting programming now than ever before. The fixed function pipeline has been replaced with programmable hardware.

If you arrived late and were using PCs exclusively during some of that time you might have missed what was going on and thought those crappy early software renderers were where the action was in graphics programming. You’d have been wrong.

If there’s a trend it’s the opposite of what has been claimed.

There’s also a lot of software work required to design the hardware; for every major 3D engine written on a PC at any given time (say in the Quake era) there are probably 3 graphics hardware implementations being developed today (a wild guess). All of those efforts keep teams of graphics systems software engineers busy.

Improved performance also makes more algorithms feasible. We’re now implementing rendering algorithms that weren’t even attempted in the past because they were obviously impractical. That in conjunction with more programmable hardware means a lot of interesting specialized software development getting done.

Well, soon enough the graphics are going to be so good that they will seem perfectly real to us…and at that point if you doubled the speed of the graphics hardware, and (for the sake of argument) doubled the poly count…we wouldn’t notice.

And then, when CPUs get fast enough for Software Rendering to be this fast, I think we will change back to Software Rendering.

It’s just more flexible, more compatible, and programming your own Software Rasterizer separates the Men from the boys :stuck_out_tongue: Although, it will be a fair few years away.

But hardware would probably focus towards some other technique, like Ray Tracing.

-Twixn-

Thanks everyone for the discussion,

I was considering whether there will ever be an established standard for really high-level graphics, which would let us forget about all that relatively ‘low-level’ stuff we do now. Well, maybe not quite forget, but not have to handle the OpenGL state machine, care about its optimizations etc… There are so many different aspects of 3d graphics that it’s hard to believe such a high-level API could happen. What kind of higher-level API than OpenGL would suit everyone and every possible use case? Hard to think of such an API, so I think we will stick with OpenGL-level APIs for a long time yet.

Concerning ray-tracing and other expensive 3d graphics techniques - their biggest disadvantage is that they don’t scale. The quality of real-time ray-tracers is always way behind what we can do using ‘faked’ polygonal, OpenGL-like graphics. We still need A LOT more horse power to handle ray-tracers, not to mention global illumination algorithms. They’re not in the same position that today’s 3d graphics were in 15 years ago. Z-buffering, even though impractical 15 years ago, was surely reachable within a few years because its cost is constant, while things like global illumination cost exponentially more. Fortunately, as said before, our CPUs’ speed is progressing exponentially too.

Last thought, concerning perfectly real environments - what I would call some kind of milestone in 3d graphics is when we will be able to make humans so real that one wouldn’t be able to tell whether they’re faked or not.

in defense of some stuff i said, the definition of what a graphics programmer ‘does’ will, as ever, continue to evolve.

as for programming everything in software, this will never be the way vr will be done, because it is simply a supreme waste of energy… your machine will require much more energy and cooling and your utility bill will go through the roof.

ideally you want to do everything possible in hardware. shaders are essentially software if there is such a thing as software. but much of what a contemporary graphics card does is still extremely static stuff, as it should be.

even as for shaders, eventually probably about 3 shaders will emerge as the standard for totally realistic virtual reality and other popular shading models. these shaders would probably be mapped directly into hardware due to their extensive popularity.

Originally posted by MickeyMouse:
[b]Thanks everyone for the discussion,

Last thought, concerning perfectly real environments - what I would call some kind of milestone in 3d graphics is when we will be able to make humans so real that one wouldn’t be able to tell whether they’re faked or not.[/b]
the only thing which makes human models appear extremely fake at this point is the fact that the anatomy is not being simulated. the sooner we can begin to distance ourselves from weighted transform ‘skinning’ the sooner familiar anatomical models will begin to appear believable.

as a classically trained artist, computer graphics’ lack of respect for anatomical form is my major disappointment with the field. as a result this happens to be my primary expertise. i could say a lot about it, but this is probably not an appropriate forum. anatomical simulation is actually the holy grail of computer science right now, due mostly to its potentially scary cybernetics applications… however public research in the field appears to be dead. it was real strong in the 90s but all of the heavily funded projects fell through and the field is dead now. keep in mind this is real simulation, and not utter hacks like you see being touted in smoky scenes in ilm-type cgi offerings.

Originally posted by Twixn:
Well, soon enough the graphics are going to be so good that they will seem perfectly real to us…and at that point if you doubled the speed of the graphics hardware, and (for the sake of argument) doubled the poly count…we wouldn’t notice.
oh no, you’ll notice for a long time. at some point it is true that the techniques will probably level off… but quantity will still be pushed for a long time, just like cpu technology hasn’t really changed but the number of transistors keeps going up. believe me, the amount of energy needed to spit out the number of data points in a typical forest with underbrush is just massive. the realism won’t change, but the amount of data in the environment will continue to go up just like cpu speeds have.

And then, when CPUs get fast enough for Software Rendering to be this fast, I think we will change back to Software Rendering.
the freedom might be interesting, and emulation more feasible, but the mainstream will never go back to software. if you ever wanted to, say, watch a movie from the inside like a ghost, free to move around in a photorealistic environment indiscernible from your own, this will almost definitely all be done entirely in hardware. software will be managing massive event scheduling and AI demands. executive AI is a software task; graphics is pure hardware to the bone at the end of the day.

It’s just more flexible, more compatible, and programming your own Software Rasterizer separates the Men from the boys :stuck_out_tongue: Although, it will be a fair few years away.
most people probably have better things to do than build software renderers. i’ve built the beginnings of a software renderer which is specially designed for drawing directly to and from locally partitioned disk space, and believe me it is a depressing, time-consuming, and boring chore even if you know exactly what you are doing.

But hardware would probably focus towards some other technique, like Ray Tracing.
high-level ray tracing is especially poorly suited to hardware, especially for instance transmitted shadows. low-level ray tracing would require probably only hardware collision processing. photon mapping is much more straightforward, merely an extension of a collision system, and would allow for something very interesting which is missing from graphics, namely volumetric lighting. in the future offline cgi rendering will probably be little more than just setting the exposure level of your shot like shooting with a camera, then having hardware photons shot at it until you are happy with the photon spread of the shot.
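just to make the ‘extension of a collision system’ point concrete, here is a rough sketch of what tracing a single photon boils down to. everything here (intersect(), diffuseBounce(), storePhoton(), the 0.5 energy falloff) is a made-up placeholder for whatever collision machinery the hardware would expose, not any particular API:

[code]
// rough sketch only -- intersect(), diffuseBounce() and storePhoton() are
// hypothetical stand-ins for the collision system described above.
struct Vec3   { float x, y, z; };
struct Photon { Vec3 pos, dir, power; };
struct Hit    { Vec3 point, normal; bool valid; };

Hit  intersect(const Vec3& origin, const Vec3& dir);    // nearest surface along a ray
Vec3 diffuseBounce(const Vec3& normal);                 // random direction in the hemisphere
void storePhoton(const Vec3& point, const Vec3& power); // append to the photon buffer

void tracePhoton(Photon p, int maxBounces)
{
    for (int bounce = 0; bounce < maxBounces; ++bounce)
    {
        Hit h = intersect(p.pos, p.dir);   // a pure collision query, same machinery as physics
        if (!h.valid)
            return;                        // photon left the scene
        storePhoton(h.point, p.power);     // record the energy deposit
        p.pos = h.point;
        p.dir = diffuseBounce(h.normal);   // keep going with reduced energy
        p.power.x *= 0.5f; p.power.y *= 0.5f; p.power.z *= 0.5f;
    }
}
[/code]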

finally just a last tacked-on comment, there is probably room for hardware voxel systems as well. already the medical industry is pushing this, but it will probably go further into consumer hardware once people are able to get over polygons as the ‘only way’. some awesome effects can be better modeled with voxels. it will be interesting as well to see if texture maps ever go voxel. think paint and grime that can actually be scratched off in 3D, or a layer of snow or mud to walk through which is actually just a dynamic voxel texture. voxels are basically all about 3D texture compression.
great long-winded thread.

sincerely,

michael

EDIT: there is this:

http://www.ageia.com/technology.html

first i’ve heard of it. the world just got crazier. i wonder if i can keep up.

will keep looking for architecture details. any ideas?

i’ve built the beginnings of a software renderer which is specially designed for drawing directly to and from locally partitioned disk space, and believe me it is a depressing, time-consuming, and boring chore even if you know exactly what you are doing.
I have also built my own software renderer, I found it interesting, and I think it’s made me a better programmer…That’s one reason why I like them so much, as I’ve made my own. I’m even porting my latest project (Subliminal, for those who helped with my ATI problems) to it for fun :stuck_out_tongue: . That’s why I joked about the manliness of it, as I have done it myself.

Maybe we will never go back to software, but you have to take into account that soon enough the CPU will be too powerful to be doing just AI. And for the end consumer it may be cheaper just to tone down the graphics and skip the graphics card altogether. But, it’s just a thought.

photon mapping is much more straightforward,
Ray Tracing was only an example, I was not being specific…When hardware is able to move onto more advanced techniques than rasterization, it will. And it doesn’t matter how ill-suited a technique is, there will always be someone who will do it anyway.

I’ve heard about the PPU a while ago (haven’t looked at the link, but I’m assuming because of AGEIA), as for the architecture…it will most likely be a GPU-like chip that outputs matrices, even directly to the GPU in the form of transformation matrices, or even on the same card for added fun :stuck_out_tongue: .

-Twixn-

PhysX was announced a couple of months ago but there isn’t any detail other than the API it will use and that a couple of game companies support them.

For the near future, I think it will just be incremental steps like more fillrate, more RAM, more shader power, a couple of parts becoming programmable.
I think IHVs want to implement things that are straightforward enough to lead to a product within a year.

Someone once said that this PPU-based card idea is dumb and that if the PPU does prove itself to be valuable, it belongs on the GPU.
It would be interesting if the PPU could compute transform matrices for the objects and feed them directly to the GPU.
Or how about a PPU that can simulate a soft body like muscle and tissue? It computes the new polygons, writes them to a VBO, and the GPU picks them up and renders immediately.
etc etc etc.
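As a rough illustration of that last idea, the host side could look something like the ordinary VBO path we already have (GL 1.5 / ARB_vertex_buffer_object entry points). The ppuReadSoftBodyVertices() call below is made up - just a stand-in for however the simulated geometry would come back from the PPU:

[code]
// sketch of the host side only -- ppuReadSoftBodyVertices() is hypothetical,
// but the GL calls are the standard VBO path.
#include <GL/gl.h>
#include <vector>

extern std::vector<float> ppuReadSoftBodyVertices();  // hypothetical: xyz triples from the PPU

void updateSoftBodyVBO(GLuint vbo)
{
    std::vector<float> verts = ppuReadSoftBodyVertices();
    if (verts.empty())
        return;

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // re-specify the buffer each frame; STREAM_DRAW hints that the data changes every frame
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(float), &verts[0], GL_STREAM_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
[/code]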

[edit]removed bile for bile’s sake[/edit]
Michagl, as dorbie has just pointed out, over the last 5 years the hardware vendors have invested huge amounts of money creating something called a ‘programmable pipeline’.
They had two options: 1) continue extending the fixed functionality (ie. hard-wired paths) to accommodate more elaborate and realistic shading models, or 2) create an open architecture, whereby programmers (or ‘the unemployed’, according to you) could create their own shading models, more complex, less complex, more surreal, more stylistic, whatever. Heck, they could even implement their own ray tracers if they wanted.
What you’re suggesting is that after all this effort, the vendors are going to close up shop again, hard-wire everything, and give the application programmer a fixed choice of shading models. Doubt it. There will be standardised shader configurations built on top of programmable hardware (like there is now), but there is no need to hard-wire it (and never will be), just as you can’t buy a hard-wired database chip.
BTW, I think someone should break it to you at some point: you are not the only one to discover parametric surfaces, you’re not the only one to realise their potential in saving bandwidth and storage, you’re not the only one to try to construct a standardised, opengl-like API for parametric surfaces - you just need to look further than the glu library, that’s all.
You are not the saviour of CGI. We have not been awaiting your arrival.

weren’t curves done in consumer hardware as far back as the gf3? (never really took off)
even with 3d modelling they’re not used exclusively. in the near term (3-10 years) i believe models will be mostly polygons with *displacement mapping, which is far easier for artists to control than working with minute curves to express miniature detail
*either true or some quasi method, eg the fpo’s mentioned on these forums before

briefly, i’ve been looking over the ageia Novodex SDK… it’s all ‘virtual’ c++… hopefully an opengl-type API will be developed. and yeah, it would make more sense to put the ppu on the same board/chip with the gpu, but that will probably take at least a year to get started if the ppu proves popular. as for the architecture, there is no telling how it works just looking at the API (or is it the SDK?). the SDK might be all software, but it is extremely complex. i wonder if the hardware is sophisticated enough to manage its own collision ‘worlds’ internally? there is probably some embedded software running on the hardware. i wouldn’t know.


knackered, why must you always get personal? yes, programmability will always be there. but if there is a single massive shader used 99.9% of the time then it should be hardwired. and yes, expensive hardware like SGInfinity or something does do NURBS models, but i believe they are done with brute-force per-screen tessellation. anyhow i’m not advocating anything… just pointing out that polygons alone can never produce photo-realistic environments. and though doing the sampling work with the cpu is quite possible, it would make more sense to do it directly on the graphics hardware in parallel.

This post became way larger than I had hoped for (seems I have a gift for that). I still wanted to share it, as I came to realize some of the ideas might actually lead to something.

Obviously vertices and polygons are here to stay. I think we all know that. Even if something else was/is added, they are still the most obvious, straight-forward and precise instrument to visualize some things.

What they are not, however, is a be-all end-all solution to all visualization - at least not for storage of representations, and there is therefore obviously an impedance mismatch (to use an EE term).

Voxels (the real kind) have been mentioned, where polygons are a complete misfit but 3D textures are a perfect fit for uniformly distributed sample points, which in turn is a 1:1 match for some medical imaging. Due to current hardware restrictions, 3D textures like much else require artificial partitioning of data to fit the hardware. As partitioning has always been a problem in CS, for just about any field we can think of, I don’t expect that to change any time soon - if ever.
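As a concrete aside, uploading one such brick of volume data is already possible with plain OpenGL 1.2 via glTexImage3D. The loadBrick() helper and the 8-bit luminance density format below are just assumptions for the sake of the sketch:

[code]
// minimal sketch: upload one size^3 brick of 8-bit density samples as a 3D texture.
// loadBrick() is a made-up placeholder for whatever produces the voxel data.
#include <GL/gl.h>
#include <vector>

std::vector<unsigned char> loadBrick(int size);   // hypothetical: returns size^3 density values

GLuint uploadVolumeBrick(int size)
{
    std::vector<unsigned char> voxels = loadBrick(size);

    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_3D, tex);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // one byte per voxel, luminance only -- enough for a raw density volume
    glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE8,
                 size, size, size, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, &voxels[0]);
    return tex;
}
[/code]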

Polygon-based models when viewed from a distance are often LODed. Obviously, since it makes no sense having a 100k poly model rendered in all its glory, covering 4 screen pixels. But that also displays a class of problems where we both have a mismatch in representation, and where the server potentially could have a bit more flexibility, to possibly do LODing on the server.

Large terrains are also a class of problems, and one I myself have a soft spot for, where the polygon approach is merely an approximation - an approximation that in areas even today has reached its limit. Obviously using a simple 2D heightmap as has been suggested here, or a parametric only geometry, would be too limiting.

What possibly could work, and I’m going out on a limb here so please don’t flame me too badly if this is indeed insanity :slight_smile: , could be something like a more generic program run once for each frame. Perhaps we could have many of them, and run them at different points for different purposes, but I’m thinking of a fairly infrequently called type of program. Perhaps callable as display lists, but in fact full server-side programs (in a language yet to be invented).

These programs could as input use … just about anything. Say texture objects, VBO, more-or-less free-form data much as we can feed gl*Pointer some data with a stride, and what’s in between the common data and the stride could be used here, or just about any free-form data the programmer on the client decides to send to the server program. Anyway, that’s really an unimportant detail at this point.

What could be the important detail, is what this program could generate. What I’m thinking of could possibly do the same things as any program could do today using the OpenGL API, so long as it (obviously) only affected server-side state and data. It could also be that due to its infrequent calling, it could even be limited to essentially immediate gl calls. I haven’t considered that part at all.

But let’s explore what such an app could perhaps do, and how more visualization work could be shifted to the server.

Say we are to write a 3D medical imaging program, and have voxels as input. Let’s say we have 1536^3 sample points. We break them into 9 3D textures, which are uploaded to the server. As I have exactly zero experience in this field, I’m again going out on a limb here. Say we have the dumbest of dumb programs to visualize this, and we create cubes, one for each voxel, meaning roughly 3.6 billion cubes. No real program would obviously do this, but let’s say we have a 3rd-grade VB programmer creating this. :slight_smile:

Now, instead of uploading vertices, indices, normals and texture coordinates 'til sunday, we could upload a small program we have written to the server, to generate all this data right where it’s used. Say we uploaded a small parameter buffer saying what texture objects to use, a near clip-plane, and some additional stuff like scaling, rotation and stuff, and that program generated those cubes all on the server.

Stupid example? OK, maybe it was. Say we have a heightmapped terrain. We have a bit of texture splatting, a parametric LOD, some terrain features that are to augment the heightmap and so on. Upload the program, tell it what 2D texture(s) to generate geometry and stuff from, plus an extra buffer containing data for e.g. splatting, and have it all run on the server. After that, more volatile objects could be added by the client, such as vegetation, CPU-calculated objects (e.g. physics affected) and so on.
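Just to make that terrain case concrete, here is roughly what such a server-side program would have to compute, written as plain client-side C++ for illustration only - heightAt() and emitVertex() are made-up placeholders, not a proposed API:

[code]
// illustration only: the work the proposed server-side program would do.
// heightAt() samples the 2D heightmap; emitVertex() hands a vertex onwards.
float heightAt(int x, int z);                    // hypothetical heightmap lookup
void  emitVertex(float x, float y, float z);     // hypothetical geometry output

// walk the grid at a step chosen per frame from the LOD parameter
void generateTerrain(int gridSize, int lodStep)
{
    for (int z = 0; z + lodStep < gridSize; z += lodStep)
    {
        for (int x = 0; x + lodStep < gridSize; x += lodStep)
        {
            // two triangles per grid cell
            emitVertex(x,           heightAt(x,           z),           z);
            emitVertex(x,           heightAt(x,           z + lodStep), z + lodStep);
            emitVertex(x + lodStep, heightAt(x + lodStep, z),           z);

            emitVertex(x + lodStep, heightAt(x + lodStep, z),           z);
            emitVertex(x,           heightAt(x,           z + lodStep), z + lodStep);
            emitVertex(x + lodStep, heightAt(x + lodStep, z + lodStep), z + lodStep);
        }
    }
}
[/code]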

I expect the programs would have to have at least some sort of “scratch” area, memory to be used for temporary storage of stuff - but I also envision them being able to, on the server, modify already uploaded VBOs, augment vertex blending, modify light params and… basically modify most of the server state and data I can modify from the client side today - in addition to “creating” geometry.

Currently I think something like this could be quite possible. There would be no undue restrictions on what can be done. No hard-defined “You use a 2D heightmap, period!”, and also no “parametric-only surfaces”. Geometry generation, incl. LODing, would be all up to the server-side program to decide. Input can be from any server accessible source, and output would be geometry with attached states and data.

I don’t know if I managed to explain this idea very well, but I currently see it as something that could fit very well into the development and evolution of gfx hardware. It would use existing inputs, and generate a finite class of outputs.

If I somehow did manage to get the basic idea(s) through, what do you think? Insanity? Possible? Plausible? Perhaps even a good idea?

>> a single massive shader used 99.9% of the time
You mean, within a single game like in Doom3? Or for every new game? Nobody wants their game to look like all the other ones on the market.

Even on a standard outdoor scene, you will need special shaders and tricks for water, glass, rock, dust, grass, stained steel, wet wood, skin, fur, sky, sun, clouds …

Look at CG packages like 3dsmax/maya etc, there is a lot of work on materials, not just setting a specular exponent and diffuse texture for a blinn shader.
This is the whole point of Renderman shaders, they can grow complex enough to need a graphic editor to create them. Mixing procedural (programmer’s job) and textures (artist’s job) is important.

Originally posted by ZbuffeR:
[b]>> a single massive shader used 99.9% of the time
You mean, within a single game like in Doom3? Or for every new game? Nobody wants their game to look like all the other ones on the market.

Even on a standard outdoor scene, you will need special shaders and tricks for water, glass, rock, dust, grass, stained steel, wet wood, skin, fur, sky, sun, clouds …

Look at CG packages like 3dsmax/maya etc, there is a lot of work on materials, not just setting a specular exponent and diffuse texture for a blinn shader.
This is the whole point of Renderman shaders, they can grow complex enough to need a graphic editor to create them. Mixing procedural (programmer’s job) and textures (artist’s job) is important.[/b]
of course no ‘single’ shader would do, i was just playing devil’s advocate. but for instance total reality (as we know it) simulation would probably carve out the largest niche in the future of graphics… well, at least until people get bored with it.

the truth is there are about 3 major kinds of shaders for photo-realistic images (or will be in the future): the standard smooth surface shader, the displaced surface shader, and the transmissive volumetric shader. you can pretty much bet that within those major groups a few conditional branches would be about enough to accommodate most physical phenomena in a realistic scene.

at the end of the day though, for truly realistically lit scenes, i would bet that a real-time photon mapper would be used. basically it is nothing more than shooting energy into the scene and recording its collisions in a photon buffer, then rasterizing the buffer at the end. it’s just so straightforward. you would tweak options like how many photons and how much residual energy to use to hide gaps in the fill due to limited photons. from there it’s just a bunch of photon collider units running in parallel. light is awesome, it doesn’t interact with itself, only matter, that is to say there is zero interdependence. you could really optimize it by treating each photon as a little view frustum and tessellating the collision geometry based on what the photons see and don’t see. no need to waste resources in dark regions, awesome for night-time city rendering.
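a rough sketch of that outer loop, with the light’s total energy split across the photons. emitDirection(), tracePhoton() and the buffer layout are all made up here - the point is just that every photon is independent, so the colliders parallelize trivially:

[code]
// sketch only -- emitDirection() and tracePhoton() are hypothetical stand-ins.
#include <vector>

struct Vec3      { float x, y, z; };
struct PhotonHit { Vec3 position; Vec3 power; };

Vec3 emitDirection();                                // random direction from the light
void tracePhoton(Vec3 pos, Vec3 dir, Vec3 power,
                 std::vector<PhotonHit>& buffer);    // records every surface collision

std::vector<PhotonHit> shootPhotons(Vec3 lightPos, Vec3 lightPower, int photonCount)
{
    std::vector<PhotonHit> buffer;
    Vec3 perPhoton = { lightPower.x / photonCount,
                       lightPower.y / photonCount,
                       lightPower.z / photonCount };

    // photons never interact with each other, so this loop parallelizes trivially
    for (int i = 0; i < photonCount; ++i)
        tracePhoton(lightPos, emitDirection(), perPhoton, buffer);

    return buffer;   // rasterize / gather this buffer at the end
}
[/code]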

Originally posted by tamlin:
This post became way larger than I had hoped for (seems I have a gift for that)…
won’t quote the whole thing :stuck_out_tongue:
yes, what you are proposing here is imo certainly something the near future will bring, since it is a logical extension to the (rather restrictively) programmable rendering pipeline of nowadays. i think there are already discussions about adding programs that will be able to spawn vertices instead of just altering them, and in further enhancements those programs might even get as complex as you describe in your proposition, being able to access multiple types of data and even write it.

Originally posted by michagl:
knackered, why must you always get personal.
Because you’re a person, duh!

i don’t think that the PPU introduced by ageia will be a great success. of course game developers will love it, because ageia will provide a library with their PPU which allows a programmer to simulate real physics without knowing very much about it.

but the market for people who are willing to spend 50-100 dollars, euros or whatever on a card which gives their 2 or 3 favourite games a better performance doesn’t seem very big to me.

Originally posted by michagl:
i wonder if the hardware is sophisticated enough to manage its own collision ‘worlds’ internally? there is probably some embedded software running on the hardware. i wouldn’t know.

I don’t really know the details but I imagine the board will have lots of onboard RAM to store the world and to create a scratch memory area.

Since this chip is supposed to handle not only rigid bodies but soft bodies, fluids, cloth and hair, perhaps there is a need to download the newly computed geometry.

I’m not sure how soft bodies and cloth will be handled. Cloth can be done with NURBS. Hair can be done by offset values.

the major problem with soft bodies is not how they are internally stored.

the behaviour of soft bodies can be simulated using the finite element method, for example. a complex structure is divided into triangles (or quads), for which a stiffness matrix can be easily calculated. this stiffness matrix can be used to calculate the deformation which results when external forces are applied to the structure - usually using a time integration algorithm.

the problem is that you may have to use very small time steps to keep the time integration stable, maybe 1/1000th of a second or less. a PPU which is optimised for matrix operations could be much faster than a normal CPU for this special task.
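as a toy illustration of why the step size matters, here is one explicit (semi-implicit euler) integration step over a set of nodes. Node, computeForces() and the numbers are made up - this is not how any particular PPU works:

[code]
// toy sketch of one explicit integration step -- computeForces() is a
// hypothetical stand-in for evaluating the elastic (stiffness) + external forces.
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Node { Vec3 pos, vel, force; float mass; };

void computeForces(std::vector<Node>& nodes);   // hypothetical: fills in each node's force

void integrateStep(std::vector<Node>& nodes, float dt)   // dt around 0.001 s or smaller
{
    computeForces(nodes);
    for (std::size_t i = 0; i < nodes.size(); ++i)
    {
        Node& n = nodes[i];
        // semi-implicit euler: update velocity first, then position with the new velocity
        n.vel.x += dt * n.force.x / n.mass;
        n.vel.y += dt * n.force.y / n.mass;
        n.vel.z += dt * n.force.z / n.mass;
        n.pos.x += dt * n.vel.x;
        n.pos.y += dt * n.vel.y;
        n.pos.z += dt * n.vel.z;
    }
}
[/code]

at a 1/1000 s step and 30 frames/sec that loop runs about 33 times per displayed frame, which lines up with the 10-100 integration steps per frame mentioned below.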

imho it does not make much sense to make the GPU do that work. if you want to display 30 frames/sec, you may have to compute 10-100 integration steps between each frame.