Ray Tracing

Hi,

I'm trying to use ray tracing to detect whether a ray projected from some point intersects with objects. I've split this problem into two parts: one is calculating the point on a ray, and the other is detecting whether or not it intersects with some object. The part I'm stuck on is calculating the point on a ray. I've looked at a few resources on the web and come up with the following code:


 
// origin of the ray
float x_origin = 10.0, y_origin = 10.0, z_origin = 10.0;

// end point of the ray
float x_end = 1.0, y_end = 1.0, z_end = 1.0;

// coords of the point on the ray
float x_point = x_origin, y_point = y_origin, z_point = z_origin;

// distance along the ray
float r = 0.0;

for (int i = 0; i < 10; i++)
{
    // calculate the point on the ray
    x_point = x_origin + (x_end * r);
    y_point = y_origin + (y_end * r);
    z_point = z_origin + (z_end * r);

    r = r - 0.05;
}

When the calculations are performed using the coordinates above (for x,y,z_origin and x,y,z_end), everything works fine. I run into problems when I start to mess with these coordinates (e.g. swap the x,y,z_origin values with the x,y,z_end values): the point calculated is no longer on the ray. I therefore changed the way I calculate the point. Instead of a single calculation for each of the x, y, z coords, I split each one into four checks (which take into account the orientation and direction of the line):


if ((x_origin > x_end) && (x_origin < 0.0))
    x_point = x_origin + (x_end * r);
else if ((x_origin < x_end) && (x_origin < 0.0))
    x_point = x_origin - (x_end * r);
else if ((x_origin > x_end) && (x_origin > 0.0))
    x_point = x_origin - (x_end * r);
else if ((x_origin < x_end) && (x_origin > 0.0))
    x_point = x_origin + (x_end * r);

// do the same for the y and z coordinate calculations

This now seems to work when I swap the x,y,z_origin values with the x,y,z_end values. However, if I enter arbitrary coordinates it no longer works. Am I taking the right approach here (i.e. dividing the calculation for each coordinate into an if-else block)? If not, has anyone solved this problem before?

Thanks,
Ramsey

I'm not going to give you any code here. There are a lot of calculations involved (although they're simple ones) and I don't have them at the ready.

But:
you’re looking for an intersection between a line and a plane.
If you don’t treat it as such, you’ll miss a lot of intersections.

You can split this up into a couple of steps:

  1. Transform the start and end points to the coordinate system of the plane. You can determine that by means of the cross product of two vectors (two sides) of the plane: use one of the vectors as the X-axis and the cross product as the up-vector.
  2. You now have a line that at some point crosses 0 altitude. This is a simple formula to solve, since it's of the type y = a*x + b (although x in this case is the square root of the sum of the squares of the other two coordinates). It's possible that "a" is 0 - in that case the line is parallel to the plane: either no intersection at all, or the line lies in the plane and intersects it from the edges on.
  3. The 'x' can still be outside the domain - it may be before the start point or after the end point. In that case: no intersection.
  4. The final step is to compare the 2D coordinates of the solution with the boundaries of the plane (since it's likely to be a triangle, not an infinite plane). If it's on the wrong side of any of the boundaries: no intersection.
  5. You can look at the dot product of the plane's up-vector and the vector (endpoint - startpoint), or at the start and end "altitudes", to determine whether it's a hit from the front or from the back. Depending on the application, you might not be interested in a poly's back.

Testing the 2D edges is relatively simple - it's a matter of solving another a*x + b = 0 type equation.
You test which side of the three edges the intersection point is on.

Especially for static geometry, it pays to store the transformation matrix and the edges for reuse.

Thanks for your response. What you posted will be very helpful when the time comes to write the code that actually detects whether a point on the line intersects with some object. However, at the moment I haven't gotten that far, because I'm having problems with the preceding step: calculating the coordinates of a point along the ray. I forgot to post the URL of one of my sources on ray tracing, which is

http://www.codeproject.com/netcf/cfrt_article.asp

I’m still stuck with the above problem (see my first post). Can anyone help?

Thanks,
Ramsey

I must be missing something - you say you need to calculate points along the ray. What for?

Anyway, to shoot a ray from origin to endpoint,
it helps to think of it as linear movement (i.e. constant velocity) in time.

You can define the point at time t (or it seems, r in your case) as:

P(t) is the position vector at time t.
P(0) is the position at time 0, the start time.
P(T) is the position at time T, the end time.
V is the velocity vector.

V = (P(T) - P(0)) / T
P(t) = P(0) + V*t

You can add/subtract vectors component by component and you can multiply each component by the same scale factor (i.e. t/T), but I presume you know that.

If you just want a fixed number of steps, set T to 1 and let t step from 0 to 1.0 - that will simplify the formula.

By calculating the velocity from end point and startpoint there is no need for special cases.
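
In code that boils down to very little - a minimal sketch, using a small vector struct purely for illustration (the struct and function names are mine, not anything you have to use):

typedef struct { float x, y, z; } vec3;

// P(t) = P(0) + V*t, with V = (P(T) - P(0)) / T.  With T set to 1,
// t runs from 0.0 at the start point to 1.0 at the end point.
vec3 point_on_ray(vec3 start, vec3 end, float t)
{
    vec3 v = { end.x - start.x, end.y - start.y, end.z - start.z };        // velocity (T = 1)
    vec3 p = { start.x + v.x * t, start.y + v.y * t, start.z + v.z * t };  // point at time t
    return p;
}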

In your code, you’re doing a couple of strange things:

You're stepping your loop counter from 0 to 10, but r only changes by 1/20th each time, so you'll never get further than halfway.

I don't know how you've defined the end point (as a 3D translation from the start point or as the coordinates of the end point), so the following may be correct, but I can't see it from your code: you're using a negative r. That's not necessarily wrong, but it depends on whether the end point is "positive" or "negative".

Thanks for your response. I have implemented your suggestion and everything seems to be working fine now. This is how I did it (just in case anyone has a similar problem in the future):

 

float x_begin = 10.0, y_begin = -3.0, z_begin = 1.0;   // origin of the ray
float x_end = -1.0, y_end = -3.0, z_end = -1.0;        // end point of the ray

float x_point = x_begin, y_point = y_begin, z_point = z_begin;   // coords of the point on the ray
float x_velocity = 0.0, y_velocity = 0.0, z_velocity = 0.0;      // components of the velocity vector

float t = 0.0;   // time
float T = 1.0;   // end time (set to 1.0 to simplify matters)

// calculate the velocity vector. This only needs to be done
// once, which means there is no need for special cases.
x_velocity = (x_end - x_begin) / T;
y_velocity = (y_end - y_begin) / T;
z_velocity = (z_end - z_begin) / T;

while (t <= T)
{
    x_point = x_begin + (x_velocity * t);
    y_point = y_begin + (y_velocity * t);
    z_point = z_begin + (z_velocity * t);

    t = t + 0.001;
}
 

Just to clarify matters, the reason I need to calculate points along the ray is to detect whether any of the points on the line intersect any objects before the point at the end of the line is reached. Imagine you have a light source and a target object, and that there is a line between them. In between the source and the target there may be other objects. I need to check whether any point on the line (starting at the source and moving through to the target object) intersects with any object other than the target object. If so, then I stop checking, because I know that the "light" won't reach the target object. Now that everyone knows the background to the problem (which I should have pointed out earlier), does everyone agree that my approach is a good one? If not, I'd be happy to listen to alternatives.

T101, can I ask you a question? It's about the increments of the variable 't'. At the moment I increase it by 0.001. What would be the best combination of values to use for 't' and 'T', given the light/target problem I have outlined above, and what are the implications of the different values? Is it the case (as I believe it is) that the smaller the increase in the value of 't', the greater the number of points that are checked along the line, and therefore the more accurate the lighting model? Do some values work better than others? This is all bearing in mind that we are (obviously) dealing with pixel coordinates. I think the objects will end up being cylinders (gluQuadric) and polygons (not sure if this has any bearing on the answers to the previous questions).

Thanks,
Ramsey

The problem with your approach is that you'd need infinitely small steps to avoid missing an intersection with, say, a sheet of paper.

But if you use plane intersections as I roughly described earlier, you’re not dependent on resolution of t.
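
To make that concrete, here is a rough, untested sketch of the segment/triangle test, assuming the polygon is a triangle with corners A, B and C (the little vec3 struct and the names are just for illustration). It works directly with the plane equation n.(P - A) = 0, where n is the cross product of two edges, rather than the explicit change of basis I described earlier, but it maps onto the same steps 1-5:

typedef struct { float x, y, z; } vec3;

static vec3 sub(vec3 a, vec3 b)  { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static float dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3 cross(vec3 a, vec3 b)
{
    vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

// Returns 1 and fills *hit if the segment from start to end crosses triangle ABC.
int segment_hits_triangle(vec3 start, vec3 end, vec3 A, vec3 B, vec3 C, vec3 *hit)
{
    vec3 n = cross(sub(B, A), sub(C, A));   // the plane's up-vector (step 1)
    vec3 d = sub(end, start);               // direction of the segment

    float denom = dot(n, d);
    if (denom == 0.0f)                      // parallel to the plane (step 2)
        return 0;                           // (ignores the line-in-plane case)

    float t = dot(n, sub(A, start)) / denom;
    if (t < 0.0f || t > 1.0f)               // before the start or past the end (step 3)
        return 0;

    vec3 p = { start.x + d.x*t, start.y + d.y*t, start.z + d.z*t };

    // Edge tests (step 4): p must be on the inner side of all three edges.
    if (dot(n, cross(sub(B, A), sub(p, A))) < 0.0f) return 0;
    if (dot(n, cross(sub(C, B), sub(p, B))) < 0.0f) return 0;
    if (dot(n, cross(sub(A, C), sub(p, C))) < 0.0f) return 0;

    // The sign of denom distinguishes front hits from back hits (step 5).
    *hit = p;
    return 1;
}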

Higher quality ray tracing is more about shooting more rays (say in half degree intervals instead of degree intervals), calculating more reflections/refractions etc.

Your method can work, but you’ll need to make sure that your steps are smaller than the minimum thickness of any and all objects in the area.

You can reduce a lot of expensive checks by means of axis-aligned bounding boxes (minX,minY,minZ / maxX,maxY,maxZ) or bounding spheres.
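
For example, the box test can be done with the usual slab method - a rough sketch, assuming the same segment parameterisation as above (t from 0 at the start point to 1 at the end point) and a box given by its min/max corners; the names are just for illustration:

typedef struct { float x, y, z; } vec3;

// Returns 1 if the segment from start to end overlaps the box anywhere.
int segment_hits_aabb(vec3 start, vec3 end, vec3 box_min, vec3 box_max)
{
    float tmin = 0.0f, tmax = 1.0f;
    float s[3]    = { start.x, start.y, start.z };
    float d[3]    = { end.x - start.x, end.y - start.y, end.z - start.z };
    float bmin[3] = { box_min.x, box_min.y, box_min.z };
    float bmax[3] = { box_max.x, box_max.y, box_max.z };

    for (int i = 0; i < 3; i++)
    {
        if (d[i] == 0.0f)
        {
            // Segment runs parallel to this pair of slabs.
            if (s[i] < bmin[i] || s[i] > bmax[i]) return 0;
        }
        else
        {
            float t0 = (bmin[i] - s[i]) / d[i];
            float t1 = (bmax[i] - s[i]) / d[i];
            if (t0 > t1) { float tmp = t0; t0 = t1; t1 = tmp; }
            if (t0 > tmin) tmin = t0;
            if (t1 < tmax) tmax = t1;
            if (tmin > tmax) return 0;   // the slab intervals don't overlap: miss
        }
    }
    return 1;
}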

Using the following resource (plus countless others, but this is the main one):

RayTracer info page

I have managed to implement intersection detection between a ray and a sphere. The problem is that, like everything else I've tried relating to ray tracing, it's only half working at the moment. I have gone through a heck of a lot of sites that deal with this topic and have not found the solution to my problem. I have also spent many hours going through the code and can't see what's wrong. Here is a quick summary of what I'm doing:

  1. Substitute the equation of the ray into the equation of the sphere

  2. You end up with a quadratic equation which can be solved (via (-b +/- (b^2 - 4ac)^0.5)/(2a))

  3. If the discriminant (i.e. b^2 - 4ac) is negative then you know that there is no intersection between the ray and the sphere

  4. Otherwise, the intersection points are the two solutions

This approach is all taken from the URL link posted above (see the ‘Sphere’ section). Here is my code:

 

// calculate a, b and c for the quadratic equation
a = pow(x_velocity, 2) + pow(y_velocity, 2) + pow(z_velocity, 2);

b = 2 * x_velocity * (x_begin - x_sphere)
  + 2 * y_velocity * (y_begin - y_sphere)
  + 2 * z_velocity * (z_begin - z_sphere);

c = pow(x_sphere, 2) + pow(y_sphere, 2) + pow(z_sphere, 2)
  + pow(x_begin, 2) + pow(y_begin, 2) + pow(z_begin, 2)
  + 2 * (-(x_sphere * x_velocity)
         - (y_sphere * y_velocity)
         - (z_sphere * z_velocity))
  - pow(sphere_radius, 2);

// check the value of the discriminant; if it is
// negative then there is no intersection
if ((pow(b, 2) - (4*a*c)) >= 0.0)
{
    // calculate the two intersection points
    t0 = (-b + pow(pow(b, 2) - (4*a*c), 0.5)) / (2*a);
    t1 = (-b - pow(pow(b, 2) - (4*a*c), 0.5)) / (2*a);

    if (t0 < t1) closest_intersection = t0;
    else closest_intersection = t1;

    // calculate the coordinates of the
    // closest intersection point
    x_int = x_begin + x_velocity * closest_intersection;
    y_int = y_begin + y_velocity * closest_intersection;
    z_int = z_begin + z_velocity * closest_intersection;
}
else printf("NO INTERSECTION\n");

x,y,z_begin are the start coords of the ray.
x,y,z_velocity is the velocity of the ray and equals x,y,z_end - x,y,z_begin.
x,y,z_sphere are the coords of the sphere’s centre.

Now, the above code works fine for a sphere that is positioned at the origin (with a ray starting at any point and ending at the origin, i.e. one that is travelling towards the sphere). It also works fine when the sphere has been translated to be at a position behind the starting point of the ray (i.e. the ray is travelling away from the sphere). Things seem to be going wrong when:

  1. The sphere is positioned on the ray (e.g. halfway along it). The algorithm doesn't pick up the intersection.

  2. The sphere is positioned to either side of the ray. Sometimes this is counted as an intersection, sometimes it isn't (depending on the coords supplied).

  3. The sphere is positioned a little ahead of (but still in line with) the ray's end point. This is counted as an intersection.

I know that I have applied the formula manipulation correctly, because I've worked through a few test cases by hand and compared them to the values of t0 and t1 produced by my code. I suspect that the problem has to do with the fact that I am translating the sphere to different positions, but shouldn't the above algorithm work for all sphere positions?
Can anyone spot what is going wrong with my approach?

Thanks,
Ramsey

Okay, managed to fix it. I changed the calculation for c to the following:

c = pow(x_begin - x_sphere, 2) + pow(y_begin - y_sphere, 2)
  + pow(z_begin - z_sphere, 2) - pow(sphere_radius, 2);
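
(Expanding (x_begin + x_velocity*t - x_sphere)^2 + ... = sphere_radius^2 shows why: the constant term - the part with no t in it - is |begin - centre|^2 - radius^2, which is exactly the expression above. My original version used the velocity in those cross terms instead of the start coordinates.)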

It now properly detects whether or not there is an intersection with the sphere, regardless of where the sphere is positioned. The only remaining concern is that it still counts a sphere placed ahead of, and in line with, the ray end point as an intersection. No idea why…

I’ll venture an answer based on your last few sentences although I’ve not gone over your code in detail.

It sounds like you want an equation that detects intersection between a line segment and a sphere. But you are talking about an equation that detects intersection between a ray and a sphere. These are not the same thing.

A ray is infinite in length and will always intersect every sphere that it is pointed at. Once you detect that your ray intersects the sphere, it will take additional work to determine whether or not your line segment is long enough to reach the sphere along the ray.
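
In terms of the code you posted, that extra work is just a range check on the two roots. A rough sketch, assuming the same parameterisation as before (velocity = end - begin, so the start point is at t = 0 and the end point at t = 1) and the corrected c; the function name is just for illustration:

#include <math.h>

// Returns 1 only if the *segment* from begin to begin + velocity reaches the sphere.
int segment_hits_sphere(float x_begin, float y_begin, float z_begin,
                        float x_velocity, float y_velocity, float z_velocity,
                        float x_sphere, float y_sphere, float z_sphere,
                        float sphere_radius)
{
    float a = x_velocity*x_velocity + y_velocity*y_velocity + z_velocity*z_velocity;
    float b = 2 * (x_velocity * (x_begin - x_sphere)
                 + y_velocity * (y_begin - y_sphere)
                 + z_velocity * (z_begin - z_sphere));
    float c = (x_begin - x_sphere) * (x_begin - x_sphere)
            + (y_begin - y_sphere) * (y_begin - y_sphere)
            + (z_begin - z_sphere) * (z_begin - z_sphere)
            - sphere_radius * sphere_radius;

    if (a == 0.0f) return 0;                // degenerate segment (start == end)

    float discriminant = b*b - 4*a*c;
    if (discriminant < 0.0f) return 0;      // even the infinite ray misses

    float root = sqrtf(discriminant);
    float t0 = (-b - root) / (2*a);         // nearer intersection (a > 0)
    float t1 = (-b + root) / (2*a);         // farther intersection

    // The segment hits the sphere only if an intersection lies between
    // the start point (t = 0) and the end point (t = 1).
    if (t0 >= 0.0f && t0 <= 1.0f) return 1;
    if (t1 >= 0.0f && t1 <= 1.0f) return 1;
    return 0;
}

With that check in place, a sphere sitting beyond the end point produces roots greater than 1 and is ignored, which should also take care of the leftover behaviour you mentioned.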