hitTest

Originally posted by zeckensack:
Waiter!
Can we get some non-pointless threads in here?

Maybe we should have a sticky thread on top named “No collision detection, audio, file loading, raytracing and scenegraph suggestions please”.


Maybe we shouldn’t.

The topic is hit testing, so you should expect 3D geometry problems, not rendering ones.

Originally posted by GeLeTo:
Originally posted by zeckensack:
Waiter!
Can we get some non-pointless threads in here?

Maybe we should have a sticky thread on top named “No collision detection, audio, file loading, raytracing and scenegraph suggestions please”.


Collision detection is the topic of this thread, so if you don’t like it, don’t post here!

Although I do agree that there should be more constructive replies.

Originally posted by vladk:
Hit testing is very application specific. It depends on what your objects are and what their geometry is. For example, say your sphere has 700 polygons and it has to move about a world with over 5000 polygons. How fast do you think your engine can be if it tests each of the sphere’s 700 polygons against the world’s 5000 polygons?

In this case, you can use an angle/radius combination to calculate whether the sphere entered an object (not a polygon but another object).

Having more complex objects demands complex algorithms, but it can all be optimized.

Fake it 'till you make it!

Therefore, there is NO generic way to do a hit test. It depends on object geometry, and in order for OpenGL to understand an “object” it would have to be something similar to a neural network that, by experience, learns how each object behaves. It would also need a high-order persistent memory that actually does the learning. It would also need to know how specific objects move in space, so you would have to teach it some basic physics…
When I say “teach” I don’t mean a hardware implementation, but real-time learning by your OpenGL renderer… in which case you will have to keep your computer turned on forever, or use some holographic memory…
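As a rough sketch of the bounding-sphere idea described above (the struct and function names here are invented for the example), an object-vs-object test only needs a distance/radius comparison, so most pairs can be rejected before any per-polygon work:

// Hypothetical sketch: object-level bounding-sphere overlap test.
// Two objects can only be colliding if their bounding spheres overlap,
// i.e. the distance between the centres is less than the sum of the radii.
struct BoundingSphere
{
    float x, y, z;   // centre
    float radius;
};

bool SpheresOverlap(const BoundingSphere& a, const BoundingSphere& b)
{
    float dx = a.x - b.x;
    float dy = a.y - b.y;
    float dz = a.z - b.z;
    float distSq = dx * dx + dy * dy + dz * dz;   // squared distance, no sqrt needed
    float radii  = a.radius + b.radius;
    return distSq <= radii * radii;
}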

What I thought of was doing a spherical hit test first, and then testing all the polygons inside the sphere.
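A minimal sketch of that two-phase approach, reusing the hypothetical BoundingSphere/SpheresOverlap helpers above (the triangle test here is deliberately simplified and only checks vertices; a full sphere-triangle test would also handle edges and the face):

struct Triangle { float v[3][3]; };   // three vertices, x/y/z each

// Simplified narrow-phase check: is any vertex of the triangle inside the sphere?
bool TriangleTouchesSphere(const Triangle& tri, const BoundingSphere& s)
{
    for (int i = 0; i < 3; ++i)
    {
        float dx = tri.v[i][0] - s.x;
        float dy = tri.v[i][1] - s.y;
        float dz = tri.v[i][2] - s.z;
        if (dx * dx + dy * dy + dz * dz <= s.radius * s.radius)
            return true;
    }
    return false;
}

// Broad phase first; the per-polygon narrow phase runs only when it passes.
bool HitTestObject(const BoundingSphere& movingSphere,
                   const BoundingSphere& objectBounds,
                   const Triangle* polys, int polyCount)
{
    if (!SpheresOverlap(movingSphere, objectBounds))   // cheap rejection
        return false;

    for (int i = 0; i < polyCount; ++i)                // expensive part, rarely reached
        if (TriangleTouchesSphere(polys[i], movingSphere))
            return true;

    return false;
}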

Hey I just became a frequent contributor!

Great. Spam your way up the ladder; that will get you really far.

Btw, I don’t care what the topic of this thread is. The topic of this section is, if I may quote, “suggestions for the next release of OpenGL”, and not “suggestions for stuff somebody else should code and pack into a library for me because I’m lazy but still want to be a rockstar programmer”.

We’ve had this a thousand times all over, and it is, it was, and it always will be just that: pointless.

This is kinda a personal preference, but it’d be really neat if they could release a C and a C++ version of OpenGL. I mean, sometimes I get tired of C calls when I’m working in C++… I’d really like to see objects that encapsulate a certain amount of functionality. That is possibly the ONLY THING that DirectX has going for it that OpenGL doesn’t right now. I’m sure there are other libraries out there that do what I’m saying… but it’d be nice to see it from the creators. But it’s a personal coding style thing.

                            - Halcyon
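For what it’s worth, the kind of thin C++ layer being asked for could be nothing more than a class that groups a few GL calls behind member functions; the class and method names below are invented purely for illustration, on top of plain OpenGL 1.x:

#include <GL/gl.h>

// Hypothetical sketch of a thin object-oriented wrapper over the C API.
// It adds no functionality of its own; it only encapsulates related calls.
class ImmediateRenderer
{
public:
    void EnableLight(int index)            { glEnable(GL_LIGHT0 + index); }
    void BeginTriangles()                  { glBegin(GL_TRIANGLES); }
    void Vertex(float x, float y, float z) { glVertex3f(x, y, z); }
    void End()                             { glEnd(); }
};

Nothing here requires changes to OpenGL itself; it is just a wrapper an application could define on its own.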

What do you mean by objects? There are no objects in GL unless you make them, so how is that GL’s fault?

Hey, guys!

Please, bear in mind that OpenGL is a low-level graphics engine. Graphics accelerators actually have certain chips that respond to OpenGL calls. That means primitive, but very fast, per-vertex rendering.

Wrapping vertices into objects is too abstract for accelerators to handle. Having a C++ library on top of OpenGL is not a problem for the OpenGL standard but for enthusiastic programmers who wish to wrap up their code, and even then it is very application specific. And don’t forget that C++ objects are somewhat slower than using function call parametric reference.

Sorry about that, guys! I’m only a college student with a mediocre knowledge of programming concepts. When I was checking out which API to learn (D3D vs. OGL), one of the things I just thought was neat was the encapsulation of the lower-level functions.
After thinking about it a while, I guess C would be better. You could probably just build your own classes.

And don’t forget that C++ objects are somewhat slower than using function call parametric reference.

Not true. The only time class members calls are slower than regular calls is with virtual functions. And, even then, it’s just a pointer dereference. At most, you’ll be calling 1000 per frame; hardly a significant impact for most modern computers.

Not true. The only time class members calls are slower than regular calls is with virtual functions. And, even then, it’s just a pointer dereference. At most, you’ll be calling 1000 per frame; hardly a significant impact for most modern computers.

Ah, sorry Korval, but I do not agree!

When you use a class and instantiate an object, it is only efficient if you wrap it around some behaviour concept. Then you have to call instances and their high-level functions. Take this for example:

glEnable(GL_LIGHT2);
glBegin(GL_TRIANGLES);
    // ... glVertex* calls here ...
glEnd();

versus:

Sphere->RelativeToLight(2);
Sphere->RenderSelf();

C++ objects are placed in RAM, and all switches and state variables would have to be stored in those objects in order for them to be efficient. But OpenGL states and switches are, to some extent, kept in hardware, so calling a simple glEnable(GL_LIGHT1) will actually set a state variable in hardware, while a class will also have to reflect the change in its own private variable…

I mean that the concept of classes and instances makes things slower than using simple calls; otherwise, in this situation, the classes would be pointless.
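To make that last point concrete, a wrapper that wants to avoid redundant driver calls typically shadows the GL switches in its own members, which is exactly the duplicated state being described; the class below is a made-up sketch, not part of any real library:

#include <GL/gl.h>

// Hypothetical sketch of state shadowing: the object keeps its own copy of the
// lighting switches so it can skip redundant glEnable calls, at the cost of
// storing and updating a second copy of state that GL already tracks.
class LightState
{
public:
    LightState() { for (int i = 0; i < 8; ++i) enabled[i] = false; }

    void Enable(int index)
    {
        if (!enabled[index])              // touch the driver only when the cached state differs
        {
            glEnable(GL_LIGHT0 + index);
            enabled[index] = true;        // mirror the change in the object's own variable
        }
    }

private:
    bool enabled[8];                      // shadow copy of GL_LIGHT0..GL_LIGHT7
};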

Originally posted by Korval:
Not true. The only time class members calls are slower than regular calls is with virtual functions. And, even then, it’s just a pointer dereference. At most, you’ll be calling 1000 per frame; hardly a significant impact for most modern computers.

Not necessarily true. Virtual functions are a run-time thing. So the compiler cannot inline the function at compile time. You might also take a cache hit when dereferencing that pointer.

However, both points above are micro-optimizations. Worry about how fast your algorithm is in the first place. And as Korval states, it is still hardly significant.

That being said, I use virtual functions quite a lot for my higher-level classes. I just like being able to understand what the compiler and underlying architecture need to deal with.

When you use a class and instantiate an object, it is only efficient if you wrap it around some behaviour concept. Then you have to call instances and their high-level functions.

No, you’re assuming a particular style of programming. Just using OOP does not entail this:

Sphere->RelativeToLight(2);
Sphere->RenderSelf();

It would be more likely to be organised like this:

GraphicsSystem->GetLightModel()->ActiveLight(GL_LIGHT2);
GraphicsSystem->BeginPrimitive(GL_TRIANGLES);
    // ... vertex submission ...
GraphicsSystem->EndPrimitive();

C++ or OOP in graphics terms does not mean that each object represents a renderable object. Since a C++ version of OpenGL would not want to impose a rendering engine (i.e., it would still be a low-level renderer), there would not be a Sphere class or any other renderable object of that nature.
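As a hedged sketch of that kind of interface (the names below simply mirror the hypothetical calls in the snippet above, they are not a real API), a low-level C++ binding could wrap state and primitive submission without defining any renderable-object classes:

#include <GL/gl.h>

// Hypothetical low-level interface: it wraps GL state and primitive submission,
// but deliberately has no Sphere, Mesh or other renderable-object classes.
class LightModel
{
public:
    void ActiveLight(GLenum light) { glEnable(light); }
};

class GraphicsSystem
{
public:
    LightModel* GetLightModel()            { return &lights; }
    void BeginPrimitive(GLenum mode)       { glBegin(mode); }
    void Vertex(float x, float y, float z) { glVertex3f(x, y, z); }
    void EndPrimitive()                    { glEnd(); }

private:
    LightModel lights;
};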

Virtual functions are a run-time thing. So the compiler cannot inline the function at compile time.

So? Unless you put the implementation in a header file, the compiler can’t inline it either. Besides, a lot of the functions that you would want virtualized are too large to inline.

Also, if the compiler is 100% sure which instance is being used (say, you create an object on the stack and use it without going through a polymorphic type), then it can inline it.
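A small example of that point (the classes are hypothetical, just to show the two cases): a call through a base pointer goes through the vtable, while a call on a stack object of known concrete type can be devirtualized and inlined:

// Hypothetical sketch: virtual dispatch versus a statically known call.
struct Renderer
{
    virtual ~Renderer() {}
    virtual void Draw() = 0;
};

struct TriangleRenderer : Renderer
{
    void Draw() { /* issue GL calls here */ }
};

void DrawThroughPointer(Renderer* r)
{
    r->Draw();              // true virtual call: a pointer dereference into the vtable
}

void DrawOnStack()
{
    TriangleRenderer r;     // concrete type is known at compile time
    r.Draw();               // the compiler can devirtualize and inline this call
}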

You might also take a cache hit when dereferencing that pointer.

True, although I think you mean “cache miss”, not a hit. If it were a hit, then the access takes only 1 cycle.

Of course, you correctly point out that you’d probably get more out of optimizing the algorithm than nitpicking at virtual functions.