# Question about ray casting example

I found the following voxel ray casting example, explained in the first answer. I've tried to understand the code, but I'm not sure I fully do, and I hope someone can shed some light on it.

https://stackoverflow.com/questions/48090782/how-to-best-write-a-voxel-engine-in-c-with-performance-in-mind

As I understand, correct me if I’m wrong:

We create a QUAD with 4 vertices that covers the screen; it supplies the fragments that we draw or discard based on the contents of the 3D texture (the voxel positions).

What I'm not getting is the vertex shader. As I understand it, it should provide a ray with a starting position and a direction, which we then use in the fragment shader. But what should we send to the vertex shader? Shouldn't we have a ray origin + direction for each pixel on the screen that we want to draw? Does that mean we need to provide an array of vertices, one per screen pixel?

So my understanding of this algorithm:

1. Create a VBO with an array of vertices, one for each pixel on the screen.
2. Create a QUAD covering the whole screen that we use to draw fragments.
3. Create a 3D texture holding each voxel's position + color.
4. Ray cast from each pixel towards the 3D texture (for each side of the voxels).

I see your confusion. The important thing to keep in mind is that you’re not sending vertices down the pipeline for each pixel (each ray). You’re only sending 4 vertices down the pipe, one for each corner of that full-screen QUAD. And the vertex shader only executes on each one of those 4 vertices.

So you’re not dealing with individual per-pixel rays at that point. That happens at the fragment level.
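To make that contrast concrete, here is a minimal sketch (hypothetical, not from the linked answer) of the only vertex data the CPU ever uploads: four corners in normalized device coordinates, independent of the screen resolution.

```python
# The entire vertex buffer for the full-screen QUAD: 4 corners in
# normalized device coordinates (x, y in [-1, 1]). This is all the
# geometry the pipeline ever sees -- the per-pixel rays are derived
# later, in the fragment stage, by interpolating vertex-shader outputs.
quad_vertices = [
    (-1.0, -1.0),   # bottom-left
    ( 1.0, -1.0),   # bottom-right
    (-1.0,  1.0),   # top-left
    ( 1.0,  1.0),   # top-right
]

# A 1920x1080 render target still needs only these 4 vertices,
# not one vertex per pixel:
assert len(quad_vertices) == 4
```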

Let’s look at the vertex shader (with a little formatting):

```glsl
#version 420 core

uniform float  aspect;
uniform float  focal_length;
uniform mat4x4 tm_eye;

layout(location = 0) in vec2 pos;

out smooth vec3 ray_pos;    // ray start position
out smooth vec3 ray_dir;    // ray start direction

void main(void)
{
    vec4 p;

    // perspective projection
    p           = tm_eye * vec4(pos.x / aspect, pos.y, 0.0, 1.0);
    ray_pos     = p.xyz;
    p          -= tm_eye * vec4(0.0, 0.0, -focal_length, 1.0);
    ray_dir     = normalize(p.xyz);

    gl_Position = vec4(pos, 0.0, 1.0);
}
```

What’s it do?

First, each of the 4 vertex shader executions just copies its input quad vertex position directly from the pos input (vertex attribute value) to the gl_Position output. Since those inputs are the corners (±1, ±1), this places the quad in clip space so that it covers the entire screen.

Then the rest of the vertex shader just computes a ray start position and a ray direction (ray_pos and ray_dir). Each of the 4 executions of this vertex shader computes a ray_pos and ray_dir for its corner of the full-screen quad. They each then send their values to the fragment shader via an output interpolator (ray_pos and ray_dir).

Collectively, these 4 ray start positions and 4 ray direction vectors (1 pair for each of the vertex shader executions) are then smoothly interpolated across the full-screen quad so that each fragment shader receives as input the correct ray start position and ray direction for that specific pixel.
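The mechanism described above can be sketched on the CPU. The following is an illustration, not the author's code: it mirrors the vertex-shader math at the 4 corners under an assumed setup (identity `tm_eye`, `aspect = 1`, `focal_length = 1`), then mimics the rasterizer's smooth interpolation; the helper names `vertex_shader` and `fragment_inputs` are hypothetical.

```python
import math

# Evaluate the vertex-shader math at the 4 quad corners, then mimic the
# rasterizer's "smooth" interpolation to recover per-pixel ray inputs.
# Assumed setup: identity tm_eye, aspect = 1, focal_length = 1.
aspect, focal_length = 1.0, 1.0

def vertex_shader(x, y):
    """Mirror of the GLSL for one corner (with tm_eye = identity)."""
    # p = tm_eye * vec4(pos.x / aspect, pos.y, 0, 1)
    ray_pos = (x / aspect, y, 0.0)
    # p -= tm_eye * vec4(0, 0, -focal_length, 1); dir = normalize(p.xyz)
    dx, dy, dz = ray_pos[0], ray_pos[1], ray_pos[2] + focal_length
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return ray_pos, (dx / n, dy / n, dz / n)

corners = [(-1.0, -1.0), (1.0, -1.0), (-1.0, 1.0), (1.0, 1.0)]
ray_pos_c, ray_dir_c = zip(*(vertex_shader(x, y) for x, y in corners))

def fragment_inputs(u, v):
    """Bilinear blend across the quad -- what the interpolator hands
    each fragment (u, v in [0, 1] across the screen)."""
    w = ((1 - u) * (1 - v), u * (1 - v), (1 - u) * v, u * v)
    ray_pos = tuple(sum(wi * p[k] for wi, p in zip(w, ray_pos_c))
                    for k in range(3))
    d = tuple(sum(wi * p[k] for wi, p in zip(w, ray_dir_c))
              for k in range(3))
    n = math.sqrt(sum(c * c for c in d))
    return ray_pos, tuple(c / n for c in d)  # re-normalize per fragment

# The pixel in the middle of the screen gets a ray starting at the
# origin and looking straight ahead, even though no vertex was ever
# sent for that pixel:
pos_mid, dir_mid = fragment_inputs(0.5, 0.5)
# -> pos_mid == (0, 0, 0), dir_mid == (0, 0, 1)
```

Note that interpolating normalized directions shortens them slightly away from the corners, which is why a real fragment shader typically re-normalizes `ray_dir` before marching.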


Thanks for the explanation, it makes sense now!

I will continue to experiment with it a bit to really get a grip on how it works.
