I have seen two different implementations for determining when a player/object is inside a polygon. The first way is to create planes perpendicular to the polygon through each of its edges, then test the plane equation for each plane to find out whether the point lies inside the bounds of the polygon (very simple floating point operations, i.e. no divisions etc.). The second way is to sum the angles between the point and the vertices of the polygon (dot products, normalizes, and acos calls).
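Here is a rough sketch of the first method in 2D, which is how I understand it (my assumption: the polygon is convex and its vertices are listed counter-clockwise; the "plane test" then reduces to checking the sign of a 2D cross product per edge, so it is only multiplies and subtracts):

```python
def inside_convex(point, verts):
    """Side-of-edge test for a convex polygon with CCW vertex order."""
    px, py = point
    n = len(verts)
    for i in range(n):
        x1, y1 = verts[i]
        x2, y2 = verts[(i + 1) % n]
        # Cross product of the edge vector with the vector to the point.
        # A negative sign means the point is on the outside of this edge
        # (for CCW winding), so it cannot be inside the polygon.
        if (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1) < 0:
            return False
    return True
```

Note that this version assumes a convex polygon; a concave one can put a point on the "outside" of some edge while still containing it.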

The second method seems much more computationally expensive than the first; however, I have seen the second method used quite a bit in demos etc.
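For comparison, a sketch of the angle-summation method (my assumption: using atan2 to get signed angles rather than normalize + acos, which avoids the square roots but keeps the same idea; the signed angles sum to roughly 2*pi for an interior point and roughly 0 for an exterior one):

```python
import math

def inside_angle_sum(point, verts):
    """Sum the signed angles subtended at `point` by each polygon edge."""
    px, py = point
    total = 0.0
    n = len(verts)
    for i in range(n):
        # Vectors from the test point to the two edge endpoints.
        x1, y1 = verts[i][0] - px, verts[i][1] - py
        x2, y2 = verts[(i + 1) % n][0] - px, verts[(i + 1) % n][1] - py
        # Signed angle between the two vectors: atan2(cross, dot).
        total += math.atan2(x1 * y2 - y1 * x2, x1 * x2 + y1 * y2)
    # ~2*pi inside, ~0 outside; split the difference at pi.
    return abs(total) > math.pi
```

Unlike the edge-sign test, this works for concave (simple) polygons too, which may be part of why it shows up in demos despite the per-edge transcendental call.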

I guess this problem comes down to a trade-off between floating point cost (the second method) and cache coherency (the first method).

Can anyone elaborate on this?

thanks

Boris