Physics

Hi,

With NVIDIA and ATI beginning to ship physics solutions, isn’t it time to look at extending OpenGL to include physics …

i.e calls like

glForce(…)
glTorque(…)
glParticle(…)

Is anything like that under consideration somewhere?

GL is a graphics library. Short of a 90° turn in direction, this won’t happen.

Nevertheless, can you point me to docs about what you said about ATI and nVidia?

If that does come, I guess an OpenPL would be the way to go.

nVidia was working on an API. The idea is to do certain physics simulation on the GPU: things that are strictly cosmetic, like smoke and cloth.

I read about it on some tech sites (Extremetech.com, rage3d.com, …), but no real detail, just short articles.

There’s something about all this that makes me feel a little oogy. Maybe if they generalized the GPU into a general processor farm or just called it something else. I like the idea of phast fysics, though.

P.S. I kinda like cell technology for this sort of thing. But PC architecture evolution is WAY too slow to think in cell terms, I reckon. Perhaps GPUs could be wired to do something similar… :frowning:

Originally posted by Leghorn:
There’s something about all this that makes me feel a little oogy. Maybe if they generalized the GPU into a general processor farm or just called it something else. I like the idea of phast fysics, though.

Nah, won’t happen; the GPU is fast because it’s a gather processor instead of a scatter processor, and that will probably never change (dramatically, at least).
GPGPU FX physics is actually not that hard to understand or implement, and the great thing is that you can use the data directly with inctancing without having to pass through the CPU.
Just hope they implement OpenGL inctancing soon, preferably before I write the GPGPU FX physics tutorial on my site.

inctancing
What is that and how does it involve using data directly without having to pass through the CPU?

it is instancing. please :slight_smile:

it is instancing. please
That doesn’t explain how processed fragments get pumped back through the GPU (which is, as I understand it, what he is suggesting happens).

You’ll still need render to vertex array to get the result back into the front of the pipeline (of course FBO->PBO->VBO works, too).

But without instancing, you just get one vertex per written fragment. That’s enough for particle systems (with point sprites), but perhaps I want to move more complex objects with my GPU physics. For that instancing would come in handy, just define one vertex array for the complex geometry and one vertex array for the positions of the instances (that would be the render target of the physics pass).

For that instancing would come in handy, just define one vertex array for the complex geometry and one vertex array for the positions of the instances (that would be the render target of the physics pass).
Even in D3D, instancing has not proven efficient for geometry large enough that I would use the term “complex” for it. Small-ish geometry (on the order of 100 polys) works, but larger geometry doesn’t offer a performance win.

That’s not to say that it’s a bad idea to have it, particularly if the functionality is exposed a bit more generically than D3D does it.

Uh, what kind of physics are we talking about here? Smoke, waving grass blades, fluid flow? Just want to be sure we’re not delving into complex rigid body physics; solving big Jacobians and whatnot.

Actually, the more I think about it the more it makes sense to have graphics and physics on the same data “ring”, at least where the procedural stuff is concerned. Whether that ultimately means “physics” on a GPU is a good idea is still a matter for debate, but what the heck.

I’m not talking about performance win through instancing, I’m talking about not needing to read back data. The situation is different here than in the standard “instancing vs. no instancing” discussion, because we don’t have the data on the CPU, so immediate mode is useless.

I was thinking along the lines of a particle system with about 20-30 polygons per particle (think small objects in a storm). First I have a standard particle system pass on the GPU that produces an array of particle positions.

Then I want to render some objects translated to the given positions. The easiest approach without readback that I can think of is render to vertex array + instancing. That way the GPU has to do no additional work besides the overhead of instancing itself.

The alternative would be to do an additional pass that gets the detail geometry as a texture and expands the vertex array. I haven’t tested this, so I’m not sure what the performance impact is. Probably not much, but it’s still a little work compared to no work :wink:

First I have a standard particle system pass on the GPU that produces an array of particle positions.
GPU-based particle physics is not “standard”. It’s doable, but it’s far from being the norm.

I’ve never really understood this need to put physics on the GPU. I mean, it’s getting to the point nowadays where dual-core CPUs are standard. The time will come when a single-core CPU is as much a relic as a CPU without a floating-point unit. It just seems so much easier to me to spawn a thread for graphics-related tasks, and then have another thread for AI/animation/physics.

That being said, I can see where instancing would be of value for this kind of operation.

The other problem here is that of world interaction (assuming we’re talking about games/simulations). Complex world interaction is going to have to be dealt with on the CPU, or else by some elaborate scene-management delivery system for the GPU. No biggie for small particle systems and such, I suppose. But it just doesn’t seem to generalize nicely. This kind of thing certainly has its place, though, especially for all those gratuitous effects that just can’t be done as effectively any other way :wink:

Of course, writing a physics engine doesn’t make sense on the GPU, because I need the data back on the CPU anyway.

I’m talking about eye-candy physics. Putting this on the GPU doesn’t only save computation time, but also bandwidth. If I simulate the particle system on the CPU, I still have to upload the particle positions every frame.

I take it you have all heard of Ageia’s PPU, the Physics Processing Unit?

It is going to take the strain off both the CPU and the GPU. Not to mention it comes with its own API, which is cross-platform and simple to implement.

Anyway, I don’t see OpenGL moving away from what it is, graphics. And I like it that way. If you want a physics library that codes like OpenGL, try to get them to make an OpenPL. They already have OpenAL for sound, so why not OpenPL for physics? But then that wouldn’t be suitable for this forum :wink:

Even if you were to implement the physics 100% on the GPU, it still wouldn’t be suited to OpenGL. OpenGL, being totally backward compatible, is already cluttered enough; to enclose a whole new API within it would be just mad.

Just my 2 cents.

I take it you have all heard of Ageia’s PPU, the Physics Processing Unit?
I take it you’ve also heard that nobody cares about it, and that GPU-based physics combined with multi-core CPUs has a better-than-average chance of killing it before it gets off the ground.

Some benchmarks I saw show that the Ageia card gives you the same or worse performance than the software Novodex solution. For those who don’t know, the Novodex API is what Ageia uses; if you don’t have the Ageia card, it runs in software.
Maybe things will improve next year.

If you want a Open Physics Library, there is ODE (Open Dynamics Engine).
Who uses ODE? http://ode.org/users.html
Some people are saying Bloodrayne 2 is a nice game. It uses ODE.

Maybe things will improve next year.
And still no one will care. There will be even more dual-core CPUs next year.

The real problem with physics chips is that they’re physics chips; you can’t use them for anything except games.

A GPU is still a graphics card; it’s a basic necessity for running a computer. A physics card never will be.

Real physicists are so far beyond physics chips that it’s not funny; they wouldn’t be able to get by on 32-bit floats.

That’s why the GPU+dual-core approach for physics is ultimately superior. Both of them have alternate, non-game uses, so you can have some attachment to a market that doesn’t care about gaming performance. A market that buys low-end GPUs because they game occasionally, and wants a dual-core CPU for application performance. For them, it’s a nice bonus if this combination also nets them nice in-game physics.

I agree with Korval. I looked around and found PhysX boards for about 230 euros. If you spend those 230 on your CPU, you get an Athlon64 X2 4200+ instead of an Athlon64 3000+.

I can only speak for myself, but instead of buying an extra component to take the strain off my CPU, I would rather buy a second CPU to take the strain off my first one.