Strange interference pattern in raytracing fragment shader

Hello! I am writing a basic voxel-based raytracing program in OpenGL 4.3. For now I'm keeping it as simple as possible: the raytracing algorithm is just a small fragment shader that repeatedly calculates the next voxel location the ray will cross into, checks whether a cube occupies that space, and colors the pixel black if so. When I implemented this, however, I got these crazy-looking interference patterns on the surface of the cube, suggesting that not all of the rays are tracking their position correctly. Here's a screenshot: (the site wouldn't let me post more than one)

Here’s the fragment shader code (I don’t think that the C++ code that implements the shader is relevant in this case):

#version 430 core

in vec3 virtualScreenCoords;	// Basically the endpoints of the transformed virtual screen are
out vec4 fragColor;				// passed from the vertex shader so that I don't have to code the
							// linear interpolation myself
uniform vec3 cameraPos;

void main() {

vec3 rayDir = normalize(virtualScreenCoords - cameraPos);	// Direction of the ray
vec3 cubeStart = vec3(10.0, 3.0, 10.0);	// Start and end points of the cube to render
vec3 cubeFinish = vec3(20.0, 13.0, 20.0);

vec3 pos = cameraPos;	// Position of the 'tip' of the ray as it gets moved forward
vec3 temp;
float t;

fragColor = vec4(1.0);

for (int iii = 0; iii < 64; iii++) {

	temp.x = (rayDir.x > 0.0) ? floor(pos.x + 1.0) : ceil(pos.x - 1.0);
	temp.y = (rayDir.y > 0.0) ? floor(pos.y + 1.0) : ceil(pos.y - 1.0);
	temp.z = (rayDir.z > 0.0) ? floor(pos.z + 1.0) : ceil(pos.z - 1.0);
	// Now, temp contains all of the possible cube walls that the ray could hit next

	temp = (temp - cameraPos) / rayDir;
	t = min(temp.x, min(temp.y, temp.z));
	pos = cameraPos + t * rayDir;
	// Finds the closest wall and moves onto it

	fragColor = (
		pos.x >= cubeStart.x && pos.x <= cubeFinish.x &&
		pos.y >= cubeStart.y && pos.y <= cubeFinish.y &&
		pos.z >= cubeStart.z && pos.z <= cubeFinish.z
	) ? vec4(0.0, 0.0, 0.0, 1.0) : fragColor;	// Checks if ray is now within cube bounds
}
}
At first I assumed it was a floating-point precision issue, but I tried working around that, along with pretty much everything else I could think of, and I just can't seem to get rid of this problem. This was supposed to be the easiest part of the project, and it's driving me crazy that I can't get it to work correctly. I've been stuck on this forever, so if anyone more experienced could provide me with some insight, I'd greatly appreciate it!! :grinning:

P.S. I know my code isn’t very well optimized, I mainly just wrote this shader as a proof of concept and I was planning on polishing it up later.

Given the nature of the artefacts, a floating-point precision issue is exactly what I'd assume.

In theory, one of the three components of pos should always land exactly on an integer after each step. But there's no guarantee that will actually be the case. Suppose that t = min(...) selects temp.x; writing temp.x for the wall coordinate it held before the division, you should have:

	t = (temp.x - cameraPos.x) / rayDir.x;
	pos.x = cameraPos.x + t * rayDir.x;
	= cameraPos.x + (temp.x - cameraPos.x) / rayDir.x * rayDir.x
	= cameraPos.x + temp.x - cameraPos.x
	= temp.x

But the nature of floating-point arithmetic means it's likely to be off by a small amount in one direction or the other. If it rounds down and rayDir.x is positive, the next iteration effectively computes temp.x = floor(pos.x + 1.0 - epsilon), meaning you get temp.x = pos.x (the same wall again) instead of temp.x = pos.x + 1, and the ray stalls on that wall.
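You can watch this happen on the CPU with ordinary float math. This tiny C++ check (the camera/direction values are arbitrary ones I picked, nothing from your scene) runs the exact round trip your shader does, wall → t → cameraPos + t * rayDir, and counts how often the result misses the integer wall:

```cpp
// One axis of the shader's step, exactly as written there:
// temp = (temp - cameraPos) / rayDir;  pos = cameraPos + t * rayDir;
float roundTrip(float cam, float dir, float wall) {
    float t = (wall - cam) / dir;
    return cam + t * dir;
}

// Try a grid of hypothetical camera positions and directions against
// integer walls, counting how many round trips miss the wall exactly.
int countMismatches() {
    int mismatches = 0;
    for (int w = 1; w <= 50; ++w) {
        for (int c = 1; c <= 20; ++c) {
            float cam = 0.05f * c;          // cameras in (0, 1]
            float dir = 0.3f + 0.01f * c;   // assorted positive directions
            if (roundTrip(cam, dir, (float)w) != (float)w) ++mismatches;
        }
    }
    return mismatches;
}
```

On IEEE-754 hardware a fair fraction of those 1000 combinations miss, and some of the misses round down, which is exactly the stalling case above.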

I suggest rewriting the algorithm to be more robust. That might mean, e.g., using a three-way conditional on which axis was selected so that you can force that component to an exact integer, or using fixed-point arithmetic to get deterministic results. The main thing is to ensure that the ray always advances at each step.
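For reference, here's a sketch of the kind of rewrite I mean: the classic integer-stepping voxel traversal (Amanatides & Woo). Everything here is scaffolding of mine, not your code — solid() hard-codes a hypothetical cube matching your cubeStart/cubeFinish, and I've written it as plain C++ so it's easy to test, but it ports to GLSL almost line for line. The current voxel lives in integer variables, so precision drift can never make the ray re-test the same wall:

```cpp
#include <cmath>
#include <limits>

struct Vec3 { float x, y, z; };

// Hypothetical scene query: one solid cube occupying voxels
// [10,20) x [3,13) x [10,20), matching cubeStart/cubeFinish above.
static bool solid(int x, int y, int z) {
    return x >= 10 && x < 20 && y >= 3 && y < 13 && z >= 10 && z < 20;
}

// Integer-stepping traversal (Amanatides & Woo). Returns true if the
// ray from o along d reaches a solid voxel within maxSteps.
bool traceRay(Vec3 o, Vec3 d, int maxSteps) {
    const float inf = std::numeric_limits<float>::infinity();

    // Integer voxel containing the ray origin, and the step per axis.
    int ix = (int)std::floor(o.x), iy = (int)std::floor(o.y), iz = (int)std::floor(o.z);
    int sx = d.x > 0.0f ? 1 : -1, sy = d.y > 0.0f ? 1 : -1, sz = d.z > 0.0f ? 1 : -1;

    // t advance for crossing one whole voxel along each axis; an axis
    // the ray is parallel to gets infinity so it is never selected.
    float dx = d.x != 0.0f ? std::fabs(1.0f / d.x) : inf;
    float dy = d.y != 0.0f ? std::fabs(1.0f / d.y) : inf;
    float dz = d.z != 0.0f ? std::fabs(1.0f / d.z) : inf;

    // t at which the ray first crosses a voxel boundary on each axis.
    float tx = d.x != 0.0f ? (sx > 0 ? ix + 1 - o.x : o.x - ix) * dx : inf;
    float ty = d.y != 0.0f ? (sy > 0 ? iy + 1 - o.y : o.y - iy) * dy : inf;
    float tz = d.z != 0.0f ? (sz > 0 ? iz + 1 - o.z : o.z - iz) * dz : inf;

    for (int i = 0; i < maxSteps; ++i) {
        // Advance exactly one voxel along whichever axis has the
        // nearest boundary; only the integer state decides occupancy.
        if (tx <= ty && tx <= tz) { ix += sx; tx += dx; }
        else if (ty <= tz)        { iy += sy; ty += dy; }
        else                      { iz += sz; tz += dz; }
        if (solid(ix, iy, iz)) return true;
    }
    return false;
}
```

The key design point is that each iteration increments exactly one integer coordinate by exactly one, so the ray is guaranteed to advance no matter how the floating-point tx/ty/tz values round.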

Haha yeah that was it. It seems so obvious now after you’ve pointed it out. Thank you for taking the time to reply!