Ray Cast / Ray Tracing

Huh, okay, I have implemented the OBB/AABB intersection and so on. I even had time to import Bullet Physics into my engine, and it's very neat and cool. I am now back at the problem of how (generally speaking) one would register a hit from a ray on a mesh in world coordinates. For example: shoot a ray at a mesh and determine the world coordinates on that mesh where the ray intersects it (the mesh).
Bullet has rayTest (which I am using), but I am not aware of Bullet having something similar to what I want. Probably not, since this is not directly connected to physics but more to the graphics side.
Thanks !

Option #1: Transform the ray from eye space to object space. Calculate the intersection in object space. Transform the intersection point back to world space.
Option #2: Transform the ray from eye space to world space. Transform it from world space to object space. Calculate the interpolant of the intersection. Rather than using it to interpolate the ends of the ray in object space (yielding the intersection point in object space), use it to interpolate the ends of the ray in world space (yielding the intersection point in world space).

Or is the issue that you don’t know how to determine the intersection in any coordinate system? Calculating the intersection of a mesh with a ray boils down to calculating the intersection of each triangle with the ray and taking the closest intersection. You would typically use some kind of spatial index (e.g. bounding-box hierarchy, octree, etc) to avoid testing each triangle individually. In the case of a height-map, you’d use the fact that the vertices lie on a regular grid to optimise the process.

Very informative, thanks. Well, I tried using the glm::intersectRayTriangle function, which returns barycentric coordinates. Maybe my mistake was that I wasn't performing the test in the same space: the ray was in eye space and the vertices were in object space. As far as I know I can get the desired position from the barycentric coordinates with something like:
vec3 result = bary.x*v1 + bary.y*v2 + bary.z*v3, where v1, v2, v3 are the triangle's vertices? I may be totally wrong, though.

EDIT: I am using glm::unProject, so I believe that the ray must already be in world space.

EDIT2: Sample code, just for testing:



glm::vec3 v1, v2, v3;
glm::vec3 out_start = Rayz->ray_start;
glm::vec3 out_end = Rayz->ray_end;
glm::vec3 out_direction = normalize(out_end - out_start);
glm::vec3 result;

for (size_t i = 0; i < Terrain1->getTerrainData()->getIndices().size(); i += 3)
{
    v1 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 0]].position;
    v2 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 1]].position;
    v3 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 2]].position;

    if (glm::intersectRayTriangle(out_start, out_direction, v1, v2, v3, result))
    {
        glm::vec3 fresult = result.x*v1 + result.y*v2 + result.z*v3;
        cout << fresult.x << " " << fresult.y << " " << fresult.z << endl;
        //cout << result.x + result.y + result.z << endl;
    }
}

[QUOTE=Asmodeus;1272155]As far as I know I can get the desired position from the barycentric coordinates with something like:
vec3 result = bary.x*v1 + bary.y*v2 + bary.z*v3, where v1, v2, v3 are the triangle's vertices?
[/QUOTE]
Yes. The result will be in the same coordinate system as v1,v2,v3.

It will be in object space. Or more accurately, whatever space proj*model transforms from.

Yeah, well, that is what I am using for the ray cast. I am passing the view and projection matrices to unProject; does this mean that the resulting ray will be in view-projection space? Is that what you mean?


float mouse_x = (float)InputState::getMOUSE_X();
float mouse_y = WINDOW_HEIGHT - (float)InputState::getMOUSE_Y();
glm::vec4 viewport = glm::vec4(0.0f, 0.0f, WINDOW_WIDTH, WINDOW_HEIGHT);
this->ray_start = glm::unProject(glm::vec3(mouse_x, mouse_y, 0.0f), camera->getViewMatrix(), Projection, viewport);
this->ray_end   = glm::unProject(glm::vec3(mouse_x, mouse_y, 1.0f), camera->getViewMatrix(), Projection, viewport);

Worth mentioning: if I use the code I posted in my previous post, I get somewhat correct coordinates, but not usable ones. Obviously I am missing something.

[QUOTE=Asmodeus;1272158]Yeah, well, that is what I am using for the ray cast. I am passing the view and projection matrices to unProject; does this mean that the resulting ray will be in view-projection space? Is that what you mean?
[/QUOTE]
The transformation pipeline conventionally looks like:

Object coordinates
[model-view matrix]
Eye coordinates
[projection matrix]
Clip coordinates
[homogeneous normalisation]
Normalised device coordinates.
[viewport transformation]
Window coordinates

glm::unProject reverses the process, resulting in object coordinates. Whichever space the combination of the supplied model-view and projection matrices transforms from, glm::unProject will transform to.

If you’re dealing with mouse input, bear in mind that mouse coordinates normally have the origin in the top-left corner with Y increasing downward, while OpenGL window coordinates have the origin in the bottom-left corner with Y increasing upward.

Then the code should be working. The terrain vertices are in object space, as is the ray, so the code should be working fine. Still, I am getting weird results: close, but not exact.

I found that glm's ray-triangle intersection function, the one that returns barycentric coordinates, does not always return correct results. Maybe I am feeding it wrong information, but AFAIK the sum of the barycentric coordinates should always equal 1.0f. The function does return correct results MOST of the time; the code works maybe 60-70% of the time, so what's going on here? The value of fresult in the code below is in object space; since I am not transforming the terrain's vertices, I haven't transformed it to world space.
The strange thing is that fresult is mostly correct, but sometimes it returns coordinates offset from the correct position by +/-20.0f to 30.0f on all three axes (sometimes on all of them at once).


if (InputState::getMOUSE_LEFT() == 1)
{
    glm::vec3 v1, v2, v3;
    glm::vec3 out_start = Rayz->ray_start;
    glm::vec3 out_end = Rayz->ray_end;
    glm::vec3 out_direction = normalize(out_end - out_start) * 10000.0f;
    glm::vec3 result;

    for (size_t i = 0; i < Terrain1->getTerrainData()->getIndices().size(); i += 3)
    {
        v1 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 0]].position;
        v2 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 1]].position;
        v3 = Terrain1->getTerrainData()->getVertexData()[Terrain1->getTerrainData()->getIndices()[i + 2]].position;

        if (glm::intersectRayTriangle(out_start, out_direction, v1, v2, v3, result))
        {
            glm::vec3 fresult = result.x*v1 + result.y*v2 + result.z*v3;
            cout << fresult.x << " " << fresult.y << " " << fresult.z << endl;
            break; // note: this stops at the first hit found, not necessarily the closest one
        }
    }
}

EDIT: I have solved the problem. For everyone who has problems using the glm::intersect functions, keep this in mind: the function does indeed return barycentric coordinates, but the format is as follows.


glm::vec3 fresult = result.x*v1 + result.y*v2 + (1.0f - result.x - result.y)*v3;

Where v1, v2, v3 are the triangle vertices, and result.x and result.y are actual barycentric coordinates, but the third barycentric coordinate has to be reconstructed as barycentric.z = (1.0f - result.x - result.y).
In other words, result.z straight out of glm::intersect is not a barycentric coordinate at all; it is actually the ray parameter t below:


ray_origin + t*ray_direction = result.x*v1 + result.y*v2 + (1.0f - result.x - result.y)*v3;

Both expressions above are equal; it is up to you which one to use. Either of them gives the intersection point of the ray and the triangle.

