OpenGL and polys

I was noticing that all the new ideas and proposals for OpenGL 2 and so on are polygon based. Does anyone else think it would be worthwhile to discuss ray-tracing or voxel calls/methods to be included in OpenGL 2?

What does everyone else think?

OpenGL is, and should be, an immediate mode API. Raytracing requires some sort of scenegraph-like API.

Raytracing requires some sort of scenegraph-like API.

There’s no reason that OpenGL commands couldn’t be queued up and rendered in a batch invisibly by the driver. In fact, the Kyro OpenGL drivers have to do this, because a tile-based renderer needs random access to the entire scene’s data in order to draw. Tile renderers are not immediate mode; the Kyro drivers actually start rendering when a glFlush or glFinish happens.
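Roughly, the trick is just that the driver records whatever the application submits and only rasterizes once the whole scene has arrived. A toy C sketch of that idea (hypothetical names, not any vendor's actual driver code):

    /* Toy sketch: the app still "draws" immediately, but the driver only
       records the primitives and defers the real work to the flush. */
    #include <stddef.h>

    typedef struct { float v[3][3]; } Triangle;

    #define MAX_TRIS 65536
    static Triangle scene[MAX_TRIS];
    static size_t   scene_count = 0;

    /* Called by the "driver" for every triangle the app submits. */
    void driver_queue_triangle(const Triangle *t)
    {
        if (scene_count < MAX_TRIS)
            scene[scene_count++] = *t;   /* no pixels touched yet */
    }

    /* Called when the app issues glFlush/glFinish (or at swap time). */
    void driver_flush(void)
    {
        /* A tile-based renderer walks the screen tile by tile and needs the
           whole scene available, which is why the work is deferred to here. */
        for (size_t i = 0; i < scene_count; ++i) {
            /* rasterize_into_tiles(&scene[i]); */
        }
        scene_count = 0;                 /* scene consumed, start over */
    }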

I would like to point out that raytracing does not dictate your choice of primitives. If you want, you can use polygons in a raytracer. Unlike scan conversion, however, you have many other options for primitives too.
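For example, a sphere can be intersected analytically and used as a primitive directly, with no tessellation at all. A minimal C sketch of a ray/sphere test (the names and signature are just illustrative):

    /* Minimal ray/sphere intersection: the sphere itself is the primitive,
       no triangles involved. Returns 1 and the nearest hit distance *t,
       or 0 if the ray misses. Assumes dir is normalized. */
    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

    int ray_sphere(Vec3 origin, Vec3 dir, Vec3 center, float radius, float *t)
    {
        Vec3  oc   = sub(origin, center);
        float b    = dot(oc, dir);
        float c    = dot(oc, oc) - radius * radius;
        float disc = b * b - c;
        if (disc < 0.0f) return 0;           /* ray misses the sphere */
        float s  = sqrtf(disc);
        float t0 = -b - s, t1 = -b + s;
        *t = (t0 > 0.0f) ? t0 : t1;          /* nearest hit in front of origin */
        return *t > 0.0f;
    }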

It is entirely possible to write a RayTracer that accepts input from OpenGL. However, nobody wants to write an implementation that does so.

Hmm, I’ve thought a bit about it and a question came to mind: why raytracing?

A few years ago arguments could be: shadows, phong shading, bumpmapping, reflections, …

Today we have almost all of these things in hardware, and the image quality is not that bad. Even refraction can be faked (see the paper on the NVIDIA site). So I think raytracing is pretty useless for interactive rendering today: image quality is getting better and better, we can do shadows, realtime reflections, bumpmapping, …, and it’s all faster than raytracing (though not as accurate as raytracing, of course).

And things are getting even better: new hardware with displacement mapping will appear soon (although I think displacement mapping is not very relevant; it’s a very useful thing for artists creating 3D models, but not as useful for interactive rendering, although it may lower memory bandwidth requirements).

-Lev


Today we have almost all of these things in hardware, and the image quality is not that bad. Even refraction can be faked (see the paper on the NVIDIA site). So I think raytracing is pretty useless for interactive rendering today: image quality is getting better and better, we can do shadows, realtime reflections, bumpmapping, …, and it’s all faster than raytracing (though not as accurate as raytracing, of course).

And it’s all just one big hack after another. Raytracing gets it right. It’s not for interactive rendering, but then again, it never has been. Raytracing is for when you want to get the answer right. Scan conversion is for when you want to get some kind of answer right now.

Sure, all this stuff is a big hack, but the quality is OK for interactive rendering, which is more or less what OpenGL is meant for.

Regards,
-Lev

My point was that OpenGL seems to be heading in the direction of polygons only, and I wondered how people felt about it. I am not saying that OpenGL 2 should have raytracing support or (insert other rendering technique here), but I was wondering / concerned that all the talk seems to be polygon based and other methods are possibly being neglected.

There are no other methods for scan conversion. You might be able to find a way to scan convert things like spheres, but the cost of doing so is far greater than just scan converting a polygonal representation of those objects. If you’re doing scan conversion rendering, triangles are the only way to go.
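For illustration, here is a rough C sketch of what a polygonal representation of a sphere looks like in practice: a latitude/longitude tessellation fed to immediate-mode OpenGL as triangle strips (the slice/stack counts are arbitrary):

    /* Rough sketch: to scan convert a sphere, first turn it into triangles. */
    #include <GL/gl.h>
    #include <math.h>

    void draw_sphere(float r, int slices, int stacks)
    {
        const float PI = 3.14159265f;
        for (int i = 0; i < stacks; ++i) {
            /* latitude band edges, from -pi/2 to +pi/2 */
            float p0 = PI * (-0.5f + (float)i / stacks);
            float p1 = PI * (-0.5f + (float)(i + 1) / stacks);
            glBegin(GL_TRIANGLE_STRIP);
            for (int j = 0; j <= slices; ++j) {
                float t = 2.0f * PI * j / slices;   /* longitude */
                glNormal3f(cosf(p1)*cosf(t), cosf(p1)*sinf(t), sinf(p1));
                glVertex3f(r*cosf(p1)*cosf(t), r*cosf(p1)*sinf(t), r*sinf(p1));
                glNormal3f(cosf(p0)*cosf(t), cosf(p0)*sinf(t), sinf(p0));
                glVertex3f(r*cosf(p0)*cosf(t), r*cosf(p0)*sinf(t), r*sinf(p0));
            }
            glEnd();
        }
    }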

I’m personally interested in polygon collision detection… Will I be able to gain something (speed, …) by using the new version?
In fact, is there any support in OpenGL for, let’s say, triangle intersections at least?
Wouldn’t you think that this would prove useful, not only for game devs but also for mechanical simulations that have to deal with lots and lots of polys?

No. OpenGL is a low-level rendering API. It does not, and should not, have facilities for doing any form of collision detection.
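If you need it, that kind of test lives in application code or a separate collision library. As a rough illustration (not anything OpenGL provides), here is a C sketch of a ray/triangle intersection test in the Möller-Trumbore style, the sort of routine you would write or pull from a collision library yourself:

    /* Sketch of a ray/triangle test an application implements itself;
       OpenGL never sees this. Returns 1 and the hit distance *t on a hit. */
    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3  vsub  (Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }
    static Vec3  vcross(Vec3 a, Vec3 b) {
        Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return r;
    }
    static float vdot  (Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    int ray_triangle(Vec3 orig, Vec3 dir, Vec3 a, Vec3 b, Vec3 c, float *t)
    {
        Vec3  e1 = vsub(b, a), e2 = vsub(c, a);
        Vec3  p  = vcross(dir, e2);
        float det = vdot(e1, p);
        if (fabsf(det) < 1e-6f) return 0;      /* ray parallel to triangle */
        float inv = 1.0f / det;
        Vec3  s  = vsub(orig, a);
        float u  = vdot(s, p) * inv;           /* first barycentric coordinate */
        if (u < 0.0f || u > 1.0f) return 0;
        Vec3  q  = vcross(s, e1);
        float v  = vdot(dir, q) * inv;         /* second barycentric coordinate */
        if (v < 0.0f || u + v > 1.0f) return 0;
        *t = vdot(e2, q) * inv;
        return *t > 0.0f;                      /* hit must be in front of origin */
    }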