Detecting what you ‘shot’

Is there any way to use a rendered scene, or make the scene render in a way, that lets OpenGL tell me what I hit? If not, what is the fastest way to test what someone hit in a scene where 8,000+ triangles build 100-200 objects?

That is something independent of OpenGL.

[This message has been edited by NeoTuri (edited 04-23-2001).]

I understand that, but I was wondering if anyone had come up with a technique to make it faster (possibly via OpenGL). For example, one possibility (though I don't think it's a good solution) could be to use the stencil buffer. Anyway, I just wanted to make sure I wasn't missing any good ideas.

You can render 255 items at a time, each with its own stencil value (1-255). Then you read out the appropriate pixel from the stencil buffer and save it for later. Clear the stencil and repeat until you've rendered all objects. Dunno if that would be any faster than a good geometric collision/picking detection algo, though. Probably not.

If the geometry is mainly static I’d create a BSP tree. Then it’s not too hard to write an efficient function to trace which poly you have hit.

This is for an engine intended for a space game… meaning that nothing is static, which is why I was wondering. Where could I find a good geometric collision detection algorithm for hundreds of clumps of between 20 and 200 tris?

Here’s an idea similar to jwatte’s, only a bit better:

  1. Render your scene and swap buffers
  2. Render the same scene to the back color buffer with all textures and lighting turned off. Make each object a different color (this way you can have 16.7 million instead of just 256).
  3. Check user input - read back the pixel it hits (glReadPixels) and see what color it is.
  4. Clear buffer (if necessary) and goto 1

This way, too, you don’t need an extra buffer.

– Zeno
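The color-ID trick above boils down to encoding an object index into an RGB triple before drawing, then decoding it from the pixel you read back. A minimal sketch of just that encode/decode step (plain C++, no GL calls; the struct and function names are mine, not from the thread):

```cpp
#include <cassert>
#include <cstdint>

// An 8-bit-per-channel color, as you would pass to glColor3ub
// before drawing the object in the picking pass.
struct RGB { uint8_t r, g, b; };

// Pack a 24-bit object ID into R, G, B components.
RGB idToColor(uint32_t id) {
    return { uint8_t(id >> 16), uint8_t(id >> 8), uint8_t(id) };
}

// Recover the object ID from the pixel returned by glReadPixels.
uint32_t colorToId(RGB c) {
    return (uint32_t(c.r) << 16) | (uint32_t(c.g) << 8) | uint32_t(c.b);
}
```

With 8 bits per channel this gives 2^24 distinct IDs, matching the ~16.7 million figure in the post. Note that lighting, texturing, dithering, fog, and blending must all be off during the picking pass, or the color you read back won't match the one you wrote.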

Why not encapsulate your 3D object in a series of spheres, e.g. a bounding sphere,
and then collision detection is not polygon to polygon, it is sphere to sphere…

Now, sphere to sphere is a lot easier:
if the distance between the centre points is less than the sum of the two radii, the spheres have hit…

(hint: use Pythagoras’ theorem in 3D)

This means you have to do some work on your mesh data, working out the position and size of your spheres, BUT, computationally, it is less work than doing inter-polygon collision detection.
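The sphere-to-sphere test described above is only a few lines of code. A sketch (type and function names are mine, not from the thread):

```cpp
#include <cassert>
#include <cmath>

// A bounding sphere: centre (x, y, z) and radius r.
struct Sphere { float x, y, z, r; };

// Two spheres intersect when the distance between their centres
// is no greater than the sum of their radii (Pythagoras in 3D).
bool spheresHit(const Sphere& a, const Sphere& b) {
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float dz = b.z - a.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return dist <= a.r + b.r;
}
```

For example, two unit spheres whose centres are 3 units apart miss (3 > 1 + 1), while centres 1.5 units apart hit.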


dodgyposse is right on with the sphere idea. Spheres are really the simplest/fastest way to do collision detection. Ex:
Sphere1: center(x1,y1,z1) radius R1
Sphere2: center(x2,y2,z2) radius R2

If sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) is greater than (R1 + R2), you don’t have a collision. Otherwise you have a potential collision. Some games leave it at that and say a “potential collision” is an “actual collision”. Otherwise you can do further testing.

The best bet is to do hierarchical collision detection. Say you have a bunch of objects that always stay close together. Make a bounding sphere encompassing all of them and see if there is a potential collision with the group. If not, you are done. If so, then do a bounding sphere test with each object in the group. If you find a potential collision with an individual object, you can refine the test further. If the object is made of 5000 polys, you can group it into sections (for a ship, you could have left wing, right wing, cockpit, mid body, & rear body) and test each individual section. Eventually, you either have to accept that some level of sphere test is accurate enough, or break down and do a poly-to-poly or poly-to-ray test. Game Programming Gems has a nice poly-to-poly algo in it that is pretty good.
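The hierarchy described above can be sketched as a group sphere plus per-object member spheres: only if the projectile hits the group sphere do you test the members. A simplified, flat sketch (a real engine would nest this into a tree; all names are hypothetical):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Sphere { float x, y, z, r; };

// Squared-distance overlap test: avoids the sqrt entirely by
// comparing against the squared sum of radii.
static bool overlap(const Sphere& a, const Sphere& b) {
    float dx = b.x - a.x;
    float dy = b.y - a.y;
    float dz = b.z - a.z;
    float sum = a.r + b.r;
    return dx * dx + dy * dy + dz * dz <= sum * sum;
}

// Returns the index of the first member hit, or -1 for a miss.
// The group sphere encloses all members, so a miss on it rejects
// the whole group with a single test.
int hitInGroup(const Sphere& group,
               const std::vector<Sphere>& members,
               const Sphere& projectile) {
    if (!overlap(group, projectile)) return -1;  // early out
    for (size_t i = 0; i < members.size(); ++i)
        if (overlap(members[i], projectile)) return int(i);
    return -1;
}
```

The win is the early out: a group of N objects costs one test in the common miss case instead of N.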

A popular optimization on the above distance formula:
if sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) > (R1 + R2)
Since the distance is always non-negative, and since a multiplication is a lot faster than a square root, you should change this to
if ((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) > (R1 + R2)^2
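The two forms are equivalent because both sides of the comparison are non-negative, so squaring preserves the ordering. A quick side-by-side check (hypothetical helper names):

```cpp
#include <cassert>
#include <cmath>

// Distance-based test: needs a square root.
bool tooFarSqrt(float dx, float dy, float dz, float r1, float r2) {
    return std::sqrt(dx * dx + dy * dy + dz * dz) > r1 + r2;
}

// Squared test: same result for the same inputs, no square root.
bool tooFarSquared(float dx, float dy, float dz, float r1, float r2) {
    float sum = r1 + r2;
    return dx * dx + dy * dy + dz * dz > sum * sum;
}
```

For instance, a centre offset of (3, 4, 0) gives distance 5, so both tests report “too far” against a radius sum of 4 and “not too far” against a radius sum of 6.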

Of course, you can also use the OpenGL selection buffer (kinda what you were originally asking about), but I would avoid it like the plague. It’s not very good, I don’t think it’s hardware accelerated, and even if it were, it still wouldn’t likely be faster than the hierarchical bounding-sphere test.

Remember that it might be possible that what you ‘shot’ is not on the screen - it might be off-screen (if it’s a heat-seeking missile, for example) or behind another object. This would make the ‘rendered screen’ option useless.

Originally posted by IainR:
Remember that it might be possible that what you ‘shot’ is not on the screen - it might be off-screen (if it’s a heat-seeking missile, for example) or behind another object. This would make the ‘rendered screen’ option useless.

Well, all you have to do is render to an off-screen buffer from the point of view of the object that is going to “cause” the collision (the bullet, missile, laser gun, etc.). Of course, you have to allow a little bit of leeway no matter how you do it. If you detect the collision in the current frame, the collision can only occur when the projectile HITS the object, in which case the distance would be zero. Since you can’t have a true 0 for the near clip plane, the collided object would not be rendered, so you would miss it. To fix this you can calculate what it is about to hit, in which case, if the object is moving fast enough, it could actually get out of the way and you would have a false hit. This isn’t a problem for “instantaneous” weapons (like a laser), though the idea of “instantaneous” is itself leeway (since nothing known, not even light, can travel at infinite velocity).

Nonetheless, you should not use the graphics API to figure out collision detection.