What's next?

What is going to happen now that all of the major physics companies have been bought out? Are we going to see an extension to OpenGL called OpenPY (for physics)? I assume that the major players who bought out Havok and Ageia are going to release APIs for whatever physics/graphics cards they are going to release. What is that going to look like? Does anyone know of any discussion going on about a standard? I hope Intel and Nvidia are going to get along. And where is ATI? I am afraid that the biggest threat to OpenGL is going to be the upcoming fight over the physics API.

Anyway, can anyone enlighten me about the physics APIs?

Cheers
J

I think that physics is physics and graphics is graphics. Personally, I can’t see how the two are really competitors. Certainly, many applications use both physics and graphics to create realistic virtual environments, but that does not mean the two realms are inseparable.

It would be nice if there were an OpenPY, and there have even been attempts to push physics calculations onto GPUs (not to mention PPUs). However, I think the way companies like Ageia and Havok (or whoever bought them) do business is by licensing their engines, so there might be some constraints on what they can release without damaging their business.

I am not sure I understand what you mean by “threat to OpenGL”. Please explain!

Ageia and Havok will both continue working on their proprietary engines. Ageia will implement support for nVidia chips, so that nVidia can continue spreading FUD about how graphics cards are great physics accelerators. Havok will continue to implement their engine for multi-core processors, certainly with an eye on Larrabee, so that Intel has a more solid basis for argument when presenting their many-core processors.

Some companies will continue developing their in-house physics engines (id Software?), but most will support either Havok or Ageia, or even both (like the Unreal Engine 3).

I don’t see any “standard” emerging in the next few years. There is no “standard” feature set, and there is no standard way to implement one. And, looking at Havok, it is not only about the physics simulation itself, but also about the software built on top of it (like “Havok Behavior”).

MAYBE, in a few years, MS will make something like DirectPhysics, but today it is too early to try to create a standard. First one needs to wait and see how the current development continues and which directions it takes.

Jan.

Yeah, exactly, though I do expect to see some kind of mini-standard API coming from the likes of NVIDIA, Ageia and Intel, supporting some kind of basic PPU instructions.

Something like OpenPY lies quite far in the future, I’d say at least 3-6 years, and what has to happen before that is the addition of some kind of hardware physics processing in every computer, be it on the GPU or a dedicated processor.
So far only a few computers have something like that. We will have to see what AMD/ATI does, but yes, it’s a few years off.

Though personally I would like OpenPY, especially if it had good bindings and compatibility with OpenGL, so you don’t have to waste time translating the physics output into something OpenGL likes.

Since I have never had any contact with physics hardware, can someone tell me what I could do with it? From the developer’s point of view, of course.

CatDog

There is/was only the PhysX chip, and it is not open, so you can only use it through Ageia’s API, which limits its use to physics simulations.

Ageia never disclosed how that chip actually works and what makes it so well suited for physics simulations. Obviously it is good at vector calculations, and as far as I have heard, physics simulations need to evaluate a lot of branches (though I don’t know whether that is true). That was one argument for why graphics chips suck at physics simulations: they have very deep pipelines, whereas a physics engine is said to profit from very short pipelines. But don’t take this information too seriously; it is only what I heard, and it need not be true.

One other thing that was often speculated about is that Ageia’s real plan was to gain momentum through the gaming industry by providing a physics simulation, but later they planned to sell their hardware for supercomputers, where it was supposed to do vector calculations for weather forecasts, etc. Many websites claim ATI and nVidia have the same plan: to sell their GPUs in big numbers for supercomputers, as a better vector processor than CPUs. That is why they are so interested in the GPGPU projects.

Honestly, I doubt it will turn out as ATI and nVidia expect, because Intel is much too fast at integrating even better vector-processing capabilities and many cores into near-future CPUs.

Jan.

Very interesting, thanks!

CatDog

Something like OpenPY lies quite far in the future, I’d say at least 3-6 years, and what has to happen before that is the addition of some kind of hardware physics processing in every computer, be it on the GPU or a dedicated processor.

OpenGL has existed both with and without actual hardware support for it. Look at Mesa. A standard does not necessarily imply hardware support.

Ageia will implement support for nVidia chips, so that nVidia can continue spreading FUD about how graphics cards are great physics accelerators.

Graphics cards are great physics accelerators, and great at many other scientific computations as well. There is a reason CUDA exists.

I would bet it is likely that the Ageia technology gets integrated into NV chips (as happened with 3dfx) and that the resulting APIs will either be merged into CUDA, or there will be a wrapper on top of it specifically for physics. Nvidia sells hardware, not physics APIs.

I also don’t think OpenPY would have much of a market given the existence of Havok, Ageia, Bullet, and ODE, among others. There wouldn’t be much point to it.

Yes, but OpenGL was different: the basis for the standard was already there, since polygons and rasterized rendering have existed in some form or another since Pong.
With physics, the market needs to solidify a bit first, because there are many ways to do one thing and many levels at which you can write a physics engine. Maybe all that you need is simple gravity and collisions, or maybe general relativity is the thing for you.
Similarly, what method should be used for collisions, and at what level should information be returned (just a list of the closest collisions, or maybe the final coordinates for the entire simulation)?
Should AI, IK and animation be included, since in some respects they are closely related?

These are all hard questions to answer, but with the introduction of hardware physics, all questions of uniformity need to be addressed, and that ultimately leads to a standard.

I would argue that, given that the capability to perform physics calculations has existed since (and before) the birth of computers and no standard API has evolved, we are unlikely to see one come about.

The evolution of OpenGL and other 3D standards grew directly out of the evolution of graphics hardware. The standards were created because they were the most efficient way of programming the hardware in the given programming language. There was little need to create separate APIs that performed “start a polygon; define vertices/normals/whatever; end polygon”, aside from language differences (C versus C++, for example).

But as you state, physics is a vastly more complicated domain, and there will be many, many perspectives on how to do it. In graphics, we have OpenGL, DX, libgcm, and probably a few others. In physics, given that the output is not “pretty pixels”, there end up being a multitude of APIs. Ageia, Bullet, and ODE just tend to be the ones used when paired with 3D APIs. But there are so many factors at play when deciding which to choose.

These are all hard questions to answer, but with the introduction of hardware physics, all questions of uniformity need to be addressed, and that ultimately leads to a standard.

In summary, we’ve always had hardware that can do physics. I don’t believe GPUs or other co-processors change the scene so drastically that a standard must be born.